Method and device for superresolution optical measurement using singular optics
11598630 · 2023-03-07
Assignee
Inventors
CPC classification
G02B21/0072
PHYSICS
G01N21/6428
PHYSICS
G01B11/14
PHYSICS
G02B21/0056
PHYSICS
G02B21/16
PHYSICS
International classification
G01B11/14
PHYSICS
G02B21/16
PHYSICS
G01B11/25
PHYSICS
Abstract
An optical method of measurement and an optical apparatus for determining the spatial position of at least one luminous object on a sample. A sequence of at least two compact luminous distributions of different topological families is projected onto the sample, and light re-emitted by the at least one luminous object is detected. At least one optical image is generated for each luminous distribution on the basis of the detected light. The optical images are analyzed to obtain spatiotemporal information regarding the light re-emitted by the at least one luminous object, or the location of the at least one luminous object.
Claims
1. A method of optical measurement for determining the spatial position of at least one luminous object in a sample, the method comprising: projecting onto the sample a sequence of a first compact luminous distribution and a second compact luminous distribution, wherein the first and second compact luminous distributions are of different topological families; formatting, in a manner selected from the group consisting of static and dynamic, the projected sequence of the first and second compact luminous distributions to provide a polarization state configured to mitigate vector effects; detecting light re-emitted by said at least one luminous object of the sample; generating from the detected light a first image of the at least one luminous object as illuminated by the first compact luminous distribution and a second image of the at least one luminous object as illuminated by the second compact luminous distribution; and algorithmically analyzing images to obtain spatial position information of said at least one luminous object.
2. A method according to claim 1, wherein formatting the projected sequence includes formatting the projected sequence to provide the polarization state in a form that is static and rotationally symmetrical.
3. A method according to claim 2, wherein formatting the projected sequence includes formatting the projected sequence to provide the polarization state in a form that is circular.
4. A method according to claim 1, wherein formatting the projected sequence includes formatting the projected sequence to provide the polarization state in a form that is radial.
5. A method according to claim 1, wherein formatting the projected sequence includes formatting the projected sequence to provide the polarization state in a form that is azimuthal.
6. A method according to claim 1, wherein formatting the projected sequence includes formatting the projected sequence to provide the polarization state in a form that is dynamic.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The invention will now be described in connection with certain preferred embodiments with reference to the following illustrative figures so that it can be better understood.
(2) With specific reference now to the figures in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the invention only, and are presented in order to provide what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention.
(3) In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
(4) In the drawings:
(16) In all the figures, like reference numerals identify like parts.
DEFINITIONS AND TECHNICAL SUPPLEMENTS
(17) The usual definitions are used in the description for: phase and polarization, polarimetry, Stokes parameters, and Stokes-parameter measurement techniques.
(18) The center or centroid of a light distribution is the center of gravity of the intensity. The diameter of a light distribution is the diameter of the first intensity zero, for both regular and singular waves, without taking into account the central zero of a singular wave. Two light distributions are collocated if their centers coincide or are separated by a fixed, predetermined spatial offset.
(19) In this paper, we use the emission wavelength as the basic unit of length.
(20) In this paper, the usual definitions are used for the following optical components: lens (a definition broadened here to include all optical means that transmit, refract or reflect light), auxiliary optics (an optical sub-module that interfaces and adjusts either the geometric parameters or the phase and/or polarization parameters between two other optical sub-modules or modules), polarizer, analyzer, retardation plate, beamsplitter (polarizing and non-polarizing), and beam combiner (polarizing and non-polarizing).
(21) We refer to a partial polarizer to describe a component or a module whose absorption is different for the two linear polarizations—linear dichroism—or for the two circular polarizations—circular dichroism.
(22) We refer to dynamic polarization or phase sub-modules to describe optical means whose polarization or phase varies over time in a controlled manner, discrete or continuous.
(23) These dynamic polarization or phase sub-modules include, but are not limited to: wave plates rotating about their axes, light valves based on liquid crystal technology, electro-optical devices (also known as Pockels cells), Kerr cells, electro-optical resonant devices, magneto-optic devices (also known as Faraday cells), acousto-optic or elasto-optic devices, or any combination of these means.
(24) We refer to a “centroid algorithm” to describe the standard procedure for measuring the centroid and possibly the width (FWHM, Full Width at Half Maximum) of a light distribution.
(25) Many articles have been published on this algorithm, such as the article by Lindegren (“Photoelectric astrometry—A comparison of methods for precise picture location,” in Modern Astrometry, Proceedings of the Colloquium, Vienna, Austria, Sep. 11-14, 1978, 197-217 (1978)).
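As an illustrative sketch (not taken from the patent or from Lindegren's article), the centroid algorithm can be implemented in a few lines. The function name is ours, and the conversion of the rms width into an FWHM assumes a Gaussian profile:

```python
import numpy as np

def centroid_and_fwhm(img, pixel_size=1.0):
    """Intensity-weighted centroid and Gaussian-equivalent FWHM of a
    2-D light distribution (standard 'centroid algorithm' sketch)."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    cx = (xs * img).sum() / total          # intensity-weighted mean column
    cy = (ys * img).sum() / total          # intensity-weighted mean row
    # isotropic rms width from the second moments; FWHM assumes a
    # Gaussian profile: FWHM = 2*sqrt(2*ln 2) * sigma
    var = (((xs - cx) ** 2 + (ys - cy) ** 2) * img).sum() / total
    sigma = np.sqrt(var / 2.0)             # per-axis rms width
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
    return cx * pixel_size, cy * pixel_size, fwhm * pixel_size

# synthetic Gaussian spot on a 32x32 microimage
ys, xs = np.indices((32, 32))
spot = np.exp(-((xs - 10.3) ** 2 + (ys - 7.7) ** 2) / (2 * 2.0 ** 2))
cx, cy, fwhm = centroid_and_fwhm(spot)
```

On such a synthetic spot the recovered centroid matches the true sub-pixel position to far better than a pixel, which is the property the measurement methodology relies on.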
(26) In this paper, the usual definitions are used for the following optoelectronic components: photoelectric detector, CCD, EMCCD, CMOS, SPAD (Single Photon Avalanche Diode) and SPAD matrix.
(27) We use the following terms: optical image for the spatial distribution of light intensity; electronic image to describe the spatial distribution of charges for a CCD, of current for a CMOS, or of events for a SPAD, created by the optical image, at a given moment, in a detection plane; and digital image to describe the matrix of numbers created by conversion of the electronic image.
(28) To simplify the reading and understanding of the text, we will apply the term image to the output of a single-pixel detector such as a PMT or SPAD, considering it as an image consisting of a single pixel.
(29) Where no ambiguity exists, or where the distinction between the three types of images is not necessary, we will use the simplified generic term of image.
(30) The images described in this document may be characterized as microimages: images whose size is substantially equal to a small number of Airy disc diameters, typically fewer than 5 diameters, and/or which contain a low number of pixels, typically 4×4 to 32×32.
(31) In a digital image Aj, the indices m and n represent the pixel indices; the origin of the pixels will be chosen as the projection of the center of the analysis volume defined in a later paragraph.
(32) We have presented the images using the terminology of matrix detectors, such as CCD, EMCCD and CMOS. For SPADs and SPAD arrays, the measurement result is a time-ordered list of photon impacts detailing, for each photon, the time and the position of the impact. To simplify the presentation of this document, we include this case in our definition of images.
(33) Polarimetry and Stokes Vector
(34) Polarimetry refers to the measurement of the polarization state of incident light.
(35) The polarization state of the incident light can be described by the Stokes parameters, a set of values introduced by George Gabriel Stokes in 1852 and used in optics.
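For reference, a minimal sketch of how the Stokes parameters follow from the complex components of a fully polarized field. The function name is ours, and the sign convention for S3 varies between textbooks:

```python
import numpy as np

def stokes_from_jones(Ex, Ey):
    """Stokes parameters of a fully polarized field from its complex
    Jones components (Ex, Ey). Sign convention for S3 is one of the
    two common choices."""
    S0 = abs(Ex) ** 2 + abs(Ey) ** 2            # total intensity
    S1 = abs(Ex) ** 2 - abs(Ey) ** 2            # horizontal/vertical balance
    S2 = 2.0 * np.real(Ex * np.conj(Ey))        # +45/-45 degree balance
    S3 = -2.0 * np.imag(Ex * np.conj(Ey))       # circular-polarization balance
    return S0, S1, S2, S3

# circularly polarized light: all of the polarized intensity is in S3
S = stokes_from_jones(1.0, 1.0j)
```

For fully polarized light, S0² = S1² + S2² + S3², which is a convenient sanity check on any polarimetric measurement.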
Additional technical information known to those skilled in the art
(37) In this chapter, we present a set of technical elements necessary for the description of the invention and known to those skilled in the art.
(38) Cartesian and Polar Coordinates
(39) The polar coordinates of a point, ρ, θ are deduced from the Cartesian coordinates x, y using the equation:
(40) ρ=√(x²+y²); θ=arctan(y/x) (EQ. 1)
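The conversion can be illustrated by a hypothetical helper (names are ours, not from the patent):

```python
import numpy as np

def cart2pol(x, y):
    """Convert Cartesian coordinates (x, y) to polar (rho, theta)."""
    rho = np.hypot(x, y)       # rho = sqrt(x^2 + y^2)
    theta = np.arctan2(y, x)   # theta in (-pi, pi], robust at x = 0
    return rho, theta

rho, theta = cart2pol(3.0, 4.0)
```

Using `arctan2` rather than `arctan(y/x)` avoids the division by zero on the y-axis and resolves the quadrant ambiguity.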
(41) Electric Field in Polar Coordinates and Angular Modes
(42) Given a complex electric field vector E(ρ,θ), described in polar coordinates (ρ,θ), the electric field can be represented by a real amplitude A(ρ,θ), a real phase φ(ρ,θ) and a unit polarization vector u(ρ,θ):
E(ρ,θ)=A(ρ,θ)exp[iφ(ρ,θ)]u(ρ,θ) (EQ. 2)
(43) It is customary in Optics to decompose the field components, i.e. its amplitude, phase and polarization in orthogonal modes, Cartesian or polar.
(44) Many decompositions in orthogonal polar modes, such as Gaussian, Hermite-Gaussian and Laguerre-Gaussian modes are known to those skilled in the art.
(45) We mainly use in this paper, the decomposition of the amplitude of the electric field in Hypergeometric-Gaussian modes, HyGG, with the following form:
A(ρ,θ) ∝ ρ^(p+|m|) exp(−ρ² + ilθ) (EQ. 3)
(46) In this decomposition, p is the radial order and l is the azimuthal order.
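The HyGG-type amplitude of EQ. 3 can be evaluated numerically; the sketch below (normalization, grid and names are ours) also exhibits the central null characteristic of a singular mode:

```python
import numpy as np

def hygg_amplitude(rho, theta, p=0, m=1, l=1):
    """Unnormalized Hypergeometric-Gaussian-type mode of EQ. 3:
    A ∝ rho^(p+|m|) * exp(-rho^2 + i*l*theta)."""
    return rho ** (p + abs(m)) * np.exp(-rho ** 2 + 1j * l * theta)

# evaluate on a small grid around the optical axis
x = np.linspace(-2.0, 2.0, 64)
X, Y = np.meshgrid(x, x)
rho, theta = np.hypot(X, Y), np.arctan2(Y, X)
A = hygg_amplitude(rho, theta, p=0, m=1, l=1)
# the singular (vortex) mode has zero intensity at rho = 0,
# with the amplitude peaking on a ring around the axis
```

For p = 0, |m| = 1 the radial profile is ρ·exp(−ρ²), which peaks at ρ = 1/√2: the bright ring surrounding the dark core of the vortex.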
(47) Singular Waves
(48) A singular wave includes a null intensity at the center and an azimuthal phase variation of a multiple of 2π. This research topic in optics, initiated by the seminal article by J. F. Nye and M. Berry (“Dislocations in Wave Trains,” Proceedings of the Royal Society of London, Series A, 336, 165-190 (1974)), is now known as “singular optics.” Examples of regular and singular waves are presented in the following.
(49) Topology and Compact Light Distributions
(51) We distinguish different families of point light distributions, of different topologies: regular distributions, in their usual definition in optics; singular distributions, otherwise known as optical vortices, of topological charge (azimuthal order) i, where the phase varies from 0 to 2πi around the direction of propagation, i being an integer; amplitude distributions with azimuthal variation of order i, also referred to as Laguerre-Gaussian distributions; and polarization, and optionally phase, distributions with azimuthal variation of order i, referred to as radially polarized Laguerre-Gauss modes.
(52) Two compact light distributions will be deemed to be of different topological families if they meet at least one of the following conditions:
(53) one is regular and the other is singular; one is point-source and the other is ring-source; the azimuthal orders of the amplitudes of the two light distributions differ; or the azimuthal orders of the polarization or of the phase of the two light distributions differ.
(54) Alternatively, two light distributions projected onto a given volume will be considered of different topologies if, over a significant portion of the jointly illuminated surface, their intensity gradients point in opposite directions.
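The gradient criterion of paragraph (54) can be made concrete as follows. The intensity floor defining the jointly illuminated region, the test distributions, and all names are our assumptions, not the patent's:

```python
import numpy as np

def reversed_gradient_fraction(I1, I2, floor=0.05):
    """Fraction of the jointly illuminated area where the intensity
    gradients of two distributions point in opposite directions."""
    g1y, g1x = np.gradient(I1)
    g2y, g2x = np.gradient(I2)
    # 'jointly illuminated': both intensities above a small floor
    joint = (I1 > floor * I1.max()) & (I2 > floor * I2.max())
    dot = g1x * g2x + g1y * g2y          # sign of gradient alignment
    return np.count_nonzero((dot < 0) & joint) / max(np.count_nonzero(joint), 1)

# regular (Gaussian) spot vs singular (vortex-like) ring
x = np.linspace(-3.0, 3.0, 128)
X, Y = np.meshgrid(x, x)
r2 = X ** 2 + Y ** 2
gauss = np.exp(-r2)
vortex = r2 * np.exp(-r2)
frac = reversed_gradient_fraction(gauss, vortex)
# inside the vortex ring the vortex intensity rises with radius while
# the Gaussian falls, so a significant fraction shows reversed gradients
```

Two copies of the same distribution give a zero reversed fraction, while a Gaussian against a vortex gives a substantial one, matching the intuition of the criterion.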
(55) The fluorophores are the best-known example of the family of point-source light nanoemitters of size substantially smaller than the diffraction limit. A nanoemitter is a small secondary light emitter attached to an object, significantly smaller than a fraction of a wavelength, typically but not limited to a size smaller than one fifth of the wavelength. A light nanoemitter absorbs the incident energy and re-emits light at the same wavelength as the incident light or at different wavelengths; the light emitted by the nanoemitter may be coherent, partially coherent or incoherent with the absorbed light. The main examples of nanoemitters are fluorophores and nanoparticles, but they also include many other elements.
(56) The definition, in the context of the invention, of a light nanoemitter is determined by the following two conditions:
(57) the creation of a secondary point-source light emitter, and the pre-determined positioning of the emitter with respect to a biological or organic entity.
(59) The physical mechanisms that can create a nanoemitter are numerous, and include but are not limited to absorption, scattering or reflection, fluorescence, emission-depletion (S W Hell, et al., “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Optics Letters 19, 780-782 (1994)), photo activation phenomena (M J Rust, et al., “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” NatMeth, 3, 793-796 (2006)), and E. Betzig, et al. (“Imaging intracellular fluorescent proteins at nanometer resolution,” Science, 313, 1642 (2006)), fluorescence of two or more photons, W. Denk, et al. (“Two-photon laser microscopy,” (Google Patents, 1991)), or non-elastic scattering, Raman scattering, or any other physical mechanisms known to those skilled in the art. We use the term light emission to describe the emission of electromagnetic waves by a light nanoemitter, the light being coherent, incoherent or partially coherent.
(60) We refer in this patent to descriptors of a single fluorophore to denote the set of information describing a fluorophore as a point source at a given moment. Since the nanoemitter is considered a point source, all the information representing it comprises a limited number of parameters, namely: its position in space, its intensity, the spectral characteristics of its intensity, and the coherence, phase and polarization of the light emitted by the fluorophore as a function of the incident light.
(61) However, in most cases, and in the description of the invention, we refer under the designation of descriptors to a subset of the descriptors of a fluorophore comprising its geometric position, its intensity, and the type of fluorophore, when several populations of light nanoemitters, differentiated for example by their emission spectrum, are present in the same sample. This simplification, used in the description, does not alter the scope of the invention, which includes in its scope all the descriptors of light nanoemitters.
(62) To simplify the understanding of the context of the invention, the following description refers only to the simplest case, that in which the nanoemitter is a fluorophore and the physical interaction is one-photon fluorescence. However, this description should be understood as a simplified illustration of a general description of the methods and concepts applicable to all light nanoemitters mentioned previously or known to those skilled in the art, regardless of the underlying physical phenomenon.
(63) It is striking that the nanoemitter samples the incident light intensity field accurately at a three-dimensional position, without influence of the complete spatial distribution of the incident intensity.
(64) We will refer to this remarkable property in this document as the sampling ability of the light nanoemitter.
(65) We refer again to the
(66) We refer again to the
(67) The luminous biological object contains information relevant to the biological object: mainly spatiotemporal information, the object's position and orientation as a function of time, and morphological information, for example in the case of the division of a cell in two.
(68) The primordial information, the map in the terminology of general semantics, is the set of descriptors of the fluorophores and their evolution over time. Biological and geometric information are only extrapolations of this primordial information.
(69) The measurement system will calculate an evaluation of the descriptors of the fluorophores, the measured map. This measured map differs from the original map due to noise, measurement conditions, system limits or measurement uncertainty. This information map can later be developed into different levels of abstraction.
(70) The map, the basic level, therefore comprises an evaluation of a set of descriptors of fluorophores, and this information may, for example, be structured as a list of fluorophores and their descriptors. This level of abstraction, which presents the results of direct measurement, contains a priori no biological information; it is the result of a physical measurement described by points of light, which could equally represent any marked entity.
(71) The second level, the geometric level of abstraction, structures the nanoemitters in the form of geometric objects. It comprises a description of luminous objects and their dynamic characteristics, such as their position or orientation, or their morphology. At this level, the information is still physical and geometric information describing a set of objects. The geometrical information uses the measured map and auxiliary information, potentially external to the system, on the relation between light spots and objects.
(72) The biological level of abstraction allows some understanding of the biological reality through a constitutive relationship between the measured objects and the corresponding biological entities. It contains a set of information on the biological object, mainly its position and dynamics, its shape and its morphology. The biological information uses the measured map, the geometrical information and auxiliary information, potentially external to the system, on the relation of the light spots and objects with biological entities. A number of conclusions on the biological functionality of the sample can be obtained at this level.
(74) Conical refraction is an optical phenomenon predicted by W. R. Hamilton in 1832 (“Third Supplement to an Essay on the Theory of Systems of Rays,” Trans. Royal Irish., Acad., pp 1-144 (1833)), and two months later confirmed experimentally by Lloyd (“On the Phenomena presented by Light in its Passage along the Axes of Biaxial Crystals”, The London and Edinburgh Philosophical Magazine and Journal of Science ii, 112-120 (1833), and “Further Experiments on the Phenomena presented by Light in its Passage along the axes of Biaxal Crystals”, The London and Edinburgh Philosophical Magazine and Journal of Science H, 207-210 (1833)). Conical refraction describes the propagation of a light beam in the direction of the optical axis of a biaxial crystal. Hamilton predicted that the light emerges in the form of a hollow cone of rays. Conical refraction is an important phase in the history of science and has played a role in the demonstration of the theory of electromagnetic waves.
(75) A renewed interest in the conical refraction occurred in the last years of the twentieth century has led to a complete theory by M. V. Berry, et al. (“Conical diffraction asymptotics: fine structure of Poggendorff rings and axial spike,” Journal Of Optics A-Pure And Applied Optics, 6, 289-300 (2004)), Berry, et al. “Conical diffraction complexified: dichroism and the transition to double refraction,” Journal Of Optics A-Pure And Applied Optics, 8, 1043 (2006), and Berry, et al, “Chiral conical diffraction,” Journal Of Optics A-Pure And Applied Optics 8, 363 (2006)), validated experimentally in 2009 (C. Phelan, et al, “Conical diffraction and Bessel beam formation with a high optical quality biaxial crystal,” J. Opt. A, Pure Appl. Opt, 7, 685-690 (2009)). Here we follow the theory, terminology and definitions of Berry, including, from this point, the name change of the physical effect, using the more rigorous term of conical diffraction.
(76) Conical diffraction has attracted considerable theoretical and experimental interest, but “no practical application seems to have been found” (M. V. Berry, et al., “Conical diffraction: Hamilton's diabolical points at the heart of crystal optics,” Progress in Optics 50, 13 (2007)).
(78) Other effects exist, creating inherently weaker conical diffraction effects or creating conical diffraction along a short optical path. These effects include polymers, liquid crystals and externally induced birefringence effects. The polymers include but are not limited to: stretched polymer sheets and cascade polymerization (A. Geivandov, et al. “Printable Thin Film birefringent Retarders for LCD”). Liquid crystals include but are not limited to the thermotropic biaxial nematic phase (B. Acharya, et al. “Biaxial Nematic Thermotropic The Elusive Phase in Rigid Bent-Core Molecules,” Pramana 61, 231-237 (2003)); the externally induced birefringence effects include, but are not limited to, applying an electric field creating an electro-optical effect on a non-centrosymmetric cubic crystal (T. Maldonado, “Electro-optic modulators,” in Handbook of Optics, M. Bass, ed. (McGraw Hill, Orlando, 1995)); and the photo-elastic modulator (J. Kemp, “Piezo-Optical Birefringence Modulators: New Use for a Long-Known Effect,” Journal of the Optical Society of America 59, 950-953 (1969)).
(79) Referring now to
(80) The spatial variable R, in the conical imaging plane, and the wave vector U are represented by the cylindrical coordinates R, θ_R and U, θ_U. λ is the wavelength of the light.
(81) The behavior of the electric field emerging from the conical crystal 32 is fully characterized by a single parameter, the conical radius R₀; the conical radius depends on the material and geometrical characteristics of the crystal, as defined in [Berry, 2004].
(82) We introduce normalized parameters so that the description below of the light distribution is valid both in the conical imaging plane and at the focus of the microscope objective, within the limits of the scalar theory of diffraction.
(83) The normalized radial position ρ, the normalized wave vector u, represented by the cylindrical coordinates ρ, θ_R and u, θ_U, and the normalized conical radius ρ₀ are given by:
(84) u = U/U₀; ρ = (2π/λ)U₀R; ρ₀ = (2π/λ)U₀R₀ (EQ. 4)
(85) U₀ being the numerical aperture of the system. For ρ₀ < 2 we refer here to a thin conical crystal, for ρ₀ << 1 to a thin linear conical crystal, and for ρ₀ < 0.5 to a thin sinusoidal conical crystal.
(86) The wave emerging from the thin conical crystal, E(ρ,θ_R), expressed in normalized coordinates, is constituted by the superposition of two waves, referred to herein as the fundamental wave, E_F(ρ), a regular wave, and the vortex wave, E_V(ρ,θ_R), a singular wave; these two waves are mutually coherent, collocated, and circularly polarized with opposite directions of chirality:
(87) E(ρ,θ_R) = E_F(ρ)e₊ + F_v(ρ)exp(iθ_R)e₋ (EQ. 5), e₊ and e₋ being the unit vectors of the two circular polarizations.
(88) In this equation, E_F(ρ) is the scalar fundamental amplitude and F_v(ρ) is the reduced scalar vortex amplitude; they are given by:
E_F(ρ) = 2π∫₀¹ du u cos(ρ₀u) J₀(ρu); F_v(ρ) = 2π∫₀¹ du u sin(ρ₀u) J₁(ρu). (EQ. 6)
(89) For a thin linear conical crystal, the fundamental wave can be approximated by an Airy disk and the vortex wave by a linear vortex, represented by:
F_v(ρ) = 2πρ₀∫₀¹ du u² J₁(ρu). (EQ. 7)
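EQ. 6 and EQ. 7 can be evaluated numerically. The sketch below assumes the normalized wave vector u runs over [0, 1] (u = U/U₀), which is our reading of the normalization; function names are ours:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

def E_F(rho, rho0):
    """Fundamental (regular) amplitude of EQ. 6."""
    val, _ = quad(lambda u: u * np.cos(rho0 * u) * j0(rho * u), 0.0, 1.0)
    return 2.0 * np.pi * val

def F_V(rho, rho0):
    """Reduced vortex (singular) amplitude of EQ. 6."""
    val, _ = quad(lambda u: u * np.sin(rho0 * u) * j1(rho * u), 0.0, 1.0)
    return 2.0 * np.pi * val
```

Because J₁(0) = 0, the vortex amplitude vanishes on the axis (the central null of the singular wave), while the fundamental amplitude is maximal there; and for ρ₀ << 1, F_V reduces to the linear-vortex form of EQ. 7.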
(90) Assuming that the action of the partial polarizer, 29, is to scale the vortex wave by a parameter α, the Stokes parameters can be deduced from the above equations:
S₀ = E_F(ρ)² + α²F_v(ρ)²
S₁ = 2αE_F(ρ)F_v(ρ) sin θ_R; S₂ = 2αE_F(ρ)F_v(ρ) cos θ_R
S₃ = E_F(ρ)² − α²F_v(ρ)²
β = θ_R (EQ. 8)
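EQ. 8 gives Stokes maps directly from the two scalar amplitudes and α. A sketch (names are ours), which also checks that the resulting light is fully polarized (S₀² = S₁² + S₂² + S₃²):

```python
import numpy as np

def conical_stokes(EF, FV, theta_R, alpha=1.0):
    """Stokes parameters per EQ. 8; the partial polarizer scales the
    vortex wave by alpha (alpha = 1 means no partial polarizer)."""
    S0 = EF ** 2 + (alpha * FV) ** 2
    S1 = 2.0 * alpha * EF * FV * np.sin(theta_R)
    S2 = 2.0 * alpha * EF * FV * np.cos(theta_R)
    S3 = EF ** 2 - (alpha * FV) ** 2
    return S0, S1, S2, S3

# sample amplitudes at two image-plane points
EF = np.array([0.8, 1.2])
FV = np.array([0.5, 0.3])
theta_R = np.array([0.3, 2.0])
S0, S1, S2, S3 = conical_stokes(EF, FV, theta_R, alpha=0.7)
# fully polarized light satisfies S0^2 = S1^2 + S2^2 + S3^2 pointwise
```

The identity S₀² = S₁² + S₂² + S₃² holds algebraically for EQ. 8, which is consistent with the superposition of two coherent, oppositely circularly polarized waves.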
(91) We use the term “sparse object” to describe a set of point-like light emitters, fewer than twelve in number, positioned in a volume whose size in each dimension is less than 3 wavelengths, at the emission wavelength or at the reflection wavelength of the emitters. The volume of size less than 3 wavelengths that contains the sparse object is referred to as an analysis volume of reduced size.
(92) We refer now to
(93) The functionality of the volumic confinement is to limit, in all three spatial dimensions, the observed region of the sample to a volume as small as possible, the analysis volume. The volumic confinement limits the analysis volume by the combination of two effects: the confinement of the projected light onto a small area, ideally the size of the Airy spot, 50, and the elimination of defocused light by the confocal hole, 28, of
(94) Consider a sparse object, 51, consisting of a plurality of fluorophores, 53 to 59. The fluorophores 53 to 55 are positioned in the analysis volume 60; only they are both excited by the light source and able to have the photons they emit reach the detector module. The fluorophores not located in the cone of illumination, 56 and 57, are not illuminated by the incident light. The light emitted by the fluorophores 58 and 59, located at the conjugate plane of the confocal hole, 28 of
(95) Two different Cartesian coordinate systems are defined in the system,
(96) The reference “i”: The axes referenced “i” represent a Cartesian reference system centered on the center of the analysis volume, 61.
(97) The reference “a”: the axes referenced “a” represent a Cartesian reference system centered, for each light nanoemitter, on the nanoemitter considered as a discrete point, 62.
(98) When using the PSIT method, described later, if a vortex is projected onto the sample, the center of the vortex will generally be defined as the center of the analysis volume.
(99) The confocal microscope limits the analysis volume using the volumic confinement described above. The volumic confinement is obtained by the combination of two effects: confinement of the projected light on a small surface, ideally of the size of the Airy disk, 50, and removal of defocused light by the confocal hole, 41. The superposition of these two effects creates a small volume, the analysis volume 60. This volume determines the size of the elementary cell detected by the system.
(100) At least one embodiment of the invention uses conical diffraction to realize the fundamental optical modules of the technique. However, alternative implementations, replacing the modules based on conical diffraction with modules based on other optical concepts, are able to provide the same functionality; they are part of the scope of this invention. Alternative optical concepts include but are not limited to uniaxial crystals, subwavelength gratings, structured laser modes, holographic components and other techniques known to those skilled in the art.
(101) These concepts, techniques and optical and optoelectronic devices are known to those skilled in the art and all such optical means are described in numerous publications such as the book written by D. Goldstein, “Polarized Light”, Pawley, “Handbook of Biological Confocal Microscopy,” Bass, “Handbook of Optics”, and many other publications known to those skilled in the art.
Acronyms
(102) We use in this paper the acronym, SRCD, “Super Resolution using Conical diffraction” to name the platform, modules and systems specific to the preferred implementation of this invention.
(103) We use in this paper the acronym PSIT, “Projected Sequence of Intensities with various Topologies.”
(104) We use in this paper the acronym, PDOS, “Position Dependent Optical Semaphore”.
(105) The SRCDP platform, “Super Resolution using Conical Diffraction Platform,” is a platform for microscopy implementing the measurement methodology and using optical modules based on conical diffraction.
(106) The SRCDP platform is the preferred implementation of the measurement methodology. We use in this paper the acronym LatSRCS to name the optical module implementing the PSIT method for the preferred implementation of this invention.
(107) We use in this paper the acronym LongSRCS to name the optical module implementing the PDOS method for the preferred implementation of this invention.
(108) Some embodiments of the present invention comprise a new measurement methodology, together with a coherent set of systemic and algorithmic methods, hardware tools, software tools and algorithms for its implementation.
(109) The measurement methodology according to embodiments allows acquisition of nanosized optical data and image superresolution.
(110) The measurement methodology is primarily, but not exclusively, used for the measurement of super-resolved data from biological samples marked with fluorophores.
(111) The measurement methodology can be implemented using the different methods of measurement and processing algorithms, described below.
(112) Among other things, the measurement methodology can be implemented together or separately using two new measurement methods, referred to as: PSIT Projected Sequence of Intensities with various Topologies,” and PDOS, “Position Dependent Optical Semaphore”.
(113) Some embodiments of the invention also relate to a system, a platform for microscopy, implementing the measurement methodology using the measurement methods PSIT and PDOS. This system, the SRCDP platform, “Super Resolution using Conical Diffraction Platform,” is the preferred implementation of the measurement methodology.
(115) Additionally, the SRCDP platform includes an improved detection module, a control module of the system, and software support.
(116) The measurement methodology comprises using both measurement methods, PSIT and PDOS. However, in some applications the use of both methods may not be necessary; in this case we will refer to the simplified measurement methodology, which is part of the scope of this invention.
(117) Some embodiments of the invention also relate to methods of using the measurement methodology for measuring the distribution of fluorophores, and for monitoring fluorophores in two or three dimensions.
(118) In addition, certain embodiments of the invention relate to a large number of variant implementations of the methodology, the PSIT and PDOS methods, the SRCDP platform, the LatSRCS and LongSRCS optical modules, and the SRCDA algorithmics.
(119) The functionality of the confocal microscope described by M. Minsky ("Microscopy Apparatus," (Google Patents, 1961)), and explained previously, is to limit, in three spatial dimensions, the observed region of the sample, the analysis volume, to as small a size as possible.
(121) Referring now to
(122) In
(123) A system implementing the method according to at least one embodiment of the invention is capable of recovering independently and accurately the attributes of several fluorophores in a luminous volume of dimensions similar to those of confocal microscopy. To achieve this goal, the methodology according to some embodiments of the invention is designed to create optically for each illuminated volume, a large amount of information in both time and spatial domains.
(124) The most developed process of the measurement methodology, according to an embodiment of the invention, can be segmented into seven steps, five optical steps, an optoelectronic detection step and an algorithmic step.
(125) Optical Steps:
(126) Projection of a sequence of compact light distributions of different topologies on the analysis volume
Emission of fluorescent light by the fluorophores
Imaging of the fluorophores in the focal plane
Separation of the detected light into several independent channels, simultaneously and/or sequentially
Optional limitation, in the focal plane, of the analyzed light
Detection Step:
(127) Detection of the light intensity by one or more point-like or matrix photodetectors.
(128) Algorithmic Step:
(129) Reconstruction of the list of fluorophores constituting the sparse object, and of their attributes, from the set of detected images.
(130) According to another embodiment of the present invention, the measurement methodology consists in carrying out the optical steps previously described, while omitting either the first or the fourth optical step.
(131) The compound optical process that implements the methodology comprises: performing a series of optical measuring processes, controlled by the control module of the system, by varying the sequence of illumination and/or the functionality of the channels and/or the position of the sequence illumination as function of measured data or of external information. An example of compound optical process implementing the methodology according to an embodiment of the invention will be detailed below.
(132) The intermediate result, the raw information, is obtained at the end of the detection step. The raw information comprises a set of images A.sub.op(m, n), representing, for the light distribution o, the image from the detection channel p.
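For illustration only, the raw information can be organized as a mapping from the pair (light distribution o, detection channel p) to a small image. The following Python sketch is not part of the invention; the container layout, function name, and array sizes are illustrative assumptions:

```python
# Sketch: container for the raw information A_op(m, n).
# Indices: o = light-distribution index, p = detection-channel index,
# (m, n) = pixel coordinates. All sizes below are illustrative only.

def make_raw_info(num_distributions, num_channels, rows, cols):
    """Return a dict mapping (o, p) to a rows x cols image of zeros."""
    return {
        (o, p): [[0.0] * cols for _ in range(rows)]
        for o in range(num_distributions)
        for p in range(num_channels)
    }

# Example: 3 projected distributions, 2 detection channels, 16x16 pixels.
raw = make_raw_info(num_distributions=3, num_channels=2, rows=16, cols=16)
assert len(raw) == 6                       # one image per (o, p) pair
assert len(raw[(0, 0)]) == 16 and len(raw[(0, 0)][0]) == 16
```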
(133) As in a confocal microscope, the measurement process analyzes a small volume within a much larger object. It therefore requires additional modules, similar to those of a confocal microscope, including a scanning process and a software module for the integration, analysis and visualization of data points into surfaces and/or three-dimensional objects.
(134) The PSIT measurement method according to one embodiment of the invention projects a sequence of light distributions of different topologies on the analysis volume.
(135) The PSIT measurement method performs the following functions: projection onto a sample of a sequence, the emission sequence, of compact light distributions of different topological families; and, for each compact light distribution: emission of light by the fluorophores of the sample; creation, by means of the microscope optics, of an optical image; and acquisition of the optical image on a photodetector and creation of a digital image.
(136) In more detail, it is noted that:
(137) The transmission sequence comprises at least two point-like light distributions of different topological families.
(138) The transmission sequence is projected onto a biological sample labeled with fluorophores, which are referred to as light nanoemitters.
(139) The light emerging from each light nanoemitter depends, for each nanoemitter, on the light intensity (in the incoherent case) or on the electromagnetic field (in the coherent case) incident at the three-dimensional spatial position of that nanoemitter; this is the light-sampling property of the nanoemitter discussed previously.
(140) For each light distribution pattern of the transmission sequence projected on the sample, an optical image is created.
(141) The set of images corresponding to all the light distributions of the transmission sequence is referred to as the sequence of images.
(142) The PSIT method according to this embodiment can acquire mainly lateral information, that is to say, the lateral position of each of the fluorophores.
(143) In a preferred embodiment, the PSIT method is implemented by the projection of light distributions of different topologies created by conical diffraction and modified by a variation of the polarization states of input and output.
(144) A PDOS method according to an embodiment of the invention comprises distributing, by means of an "optical semaphore," the light re-emitted by the fluorophores between at least two detectors.
(145) Ideally, the function of the optical semaphore is to separate different areas of the test volume on different detectors. Practically, the optical semaphore creates, for each detector, a transfer function of the light emitted by a light nanoemitter, depending on the position in space of the light nanoemitter and different for the different detectors.
(146) In a preferred embodiment, the PDOS method is implemented to separate, on different detectors, the collimated light emerging from fluorophores positioned at the focal plane of the lens from the non-collimated light emerging from fluorophores lying before or beyond the focal plane.
(147) The PDOS method, in its preferred embodiment, allows acquiring essentially longitudinal information, that is to say, the longitudinal position of each of the fluorophores.
(148) Mathematically, the method according to some embodiments of the invention provides a transfer function converting the spatial distribution of the fluorophores in space into unprocessed information consisting of a set of images. The algorithmics performs the inverse operation: it reconstructs the spatial distribution of the fluorophores in space from the set of images in the unprocessed information.
(149) In mathematical terms, the algorithm solves an inverse problem or a parameter-estimation problem. The model equations are known, and the number of fluorophores in a sparse object is a priori limited. All the mathematical procedures known to those skilled in the art for solving inverse problems and for parameter estimation can be used. An example of an algorithm adapted specifically to the measurement methodology according to an embodiment of the invention is described later.
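As an illustration of one such parameter-estimation procedure, the following Python sketch recovers the lateral position of a single emitter by a brute-force least-squares search. The intensity models for the light distributions are hypothetical stand-ins, not the distributions of the invention:

```python
import math

# Sketch: inverse problem solved by parameter estimation. A single
# emitter's lateral position is found by least-squares search over a
# grid, given the total intensities measured under each projected
# pattern. The Gaussian and offset-lobe models are illustrative only.

def patterns(x, y):
    """Illustrative incident intensities at (x, y) for three patterns."""
    g = math.exp(-(x * x + y * y))            # fundamental (Gaussian-like)
    return [g, g * (1 + x) / 2, g * (1 + y) / 2]   # two offset lobes

def estimate_position(measured):
    """Return the grid point minimizing the squared residual."""
    grid = [i / 50.0 for i in range(-50, 51)]      # search in [-1, 1]
    best, best_err = (0.0, 0.0), float("inf")
    for x in grid:
        for y in grid:
            model = patterns(x, y)
            err = sum((a - b) ** 2 for a, b in zip(model, measured))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Simulate measurements from a known position, then invert them.
true_xy = (0.3, -0.2)
x, y = estimate_position(patterns(*true_xy))
assert abs(x - 0.3) < 0.03 and abs(y + 0.2) < 0.03
```

In practice, gradient-based or sparse-recovery solvers replace the exhaustive search, but the structure (forward model plus residual minimization) is the same.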
(150) In addition, we present, for its symbolic value, a new solution to the problem of discriminating two points located at a small distance from each other. This problem, studied by Lord Rayleigh, is the basis of the resolution criterion in many areas of optics.
(151) The characteristics of the invention have thus been described rather broadly, in order that the detailed description thereof may be better understood, and in order that the present contribution to the art may be better appreciated. Many additional features of the invention are described below.
(152) The preferred implementation of the method according to one embodiment of the invention is a hardware platform and algorithms, referred to as the SRCDP platform, 500, shown in
(153) The SRCDP platform, 500, implements the method according to an embodiment of the invention, by combining the two methods PSIT and PDOS above.
(154) The platform SRCDP observed,
(155) The platform SRCDP, 500, as shown in
(156) In its hardware part:
(157) A confocal microscope, 200, adapted or optimized, similar to the confocal microscope described previously and including all appropriate components, as previously described; and two new and complementary optical modules mounted on a standard microscope. The two new optical modules are the optical modules LatSRCS, 700, and LongSRCS, 800, described in detail later with reference to
LatSRCS Optical Module Implementing the PSIT Method
(158) We describe, with reference to
(159) The optical module LatSRCS, 700, according to this embodiment is an optical module projecting, on a plurality of fluorophores in a sample, a sequence of compact light distributions of different topologies. Each fluorophore fluoresces with a sequence of fluorescent light intensities dependent on the intensity incident on the fluorophore and characterizing the lateral position of the fluorophore. In most embodiments, the compact light distributions of different topologies are created by interference, with variable amplitudes and phases, between a regular wave and a singular wave. In the preferred embodiment, the regular and singular waves are created by a thin conical crystal.
(160) The optical module LatSRCS, 700, is positioned in the illumination path of the confocal microscope 200; it projects a sequence of compact light distributions of different topologies on the sample 11 through the objective of the confocal microscope 200. In the preferred embodiment using conical diffraction, the incident intensity at a specific position on the sample 11 will be proportional, for each light distribution, to a specific combination of the Stokes parameters.
(161) The optical module LatSRCS, 700, uses an inherent feature described above, specific to the fluorophore, which samples the intensity of light incident on its precise position (the fluorophore), and reemits fluorescent light dependent on the incident light. It is remarkable that the measured information is directly related to the position of the fluorophore in the compact light distribution, relayed by the Stokes parameters. This information is frozen by the functionality of the fluorophore, its ability to absorb and re-emit light, breaking the optical chain. This information is carried by the fluorescent light as an emerging light distribution recoverable by a detector assembly 65.
(162) If the incident light varies temporally according to a sequence of compact light distributions of different topologies, the intensity of the fluorescent light reemitted varies in the same proportions. The sequence of the re-emitted fluorescent light is proportional to the sequence of compact light distributions of different topologies. From this information, it is possible to retrieve the position of the fluorophore, as explained below.
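The proportionality between the projected sequence and the re-emitted sequence can be sketched as follows; the fundamental and vortex intensity models and the quantum-yield value are illustrative assumptions, not the invention's actual distributions:

```python
import math

# Sketch: a fluorophore samples the incident intensity at its own
# position and re-emits fluorescence in the same proportions for each
# compact light distribution of the sequence. Models are illustrative.

def fundamental(x, y):
    """Regular (Gaussian-like) wave, maximal on the optical axis."""
    return math.exp(-(x * x + y * y))

def vortex(x, y):
    """Singular (vortex-like) wave, with a dark center at the origin."""
    return (x * x + y * y) * math.exp(-(x * x + y * y))

def reemitted_sequence(x, y, quantum_yield=0.8):
    """Fluorescent intensity sequence re-emitted at position (x, y)."""
    return [quantum_yield * d(x, y) for d in (fundamental, vortex)]

# A fluorophore exactly on the optical axis sees zero vortex intensity:
on_axis = reemitted_sequence(0.0, 0.0)
assert on_axis[1] == 0.0
# Off axis, the vortex/fundamental ratio grows with radial distance,
# which is what encodes the lateral position in the sequence:
near = reemitted_sequence(0.2, 0.0)
far = reemitted_sequence(0.6, 0.0)
assert far[1] / far[0] > near[1] / near[0]
```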
(163) The PSIT method, according to embodiments of the invention, refers to the projection of a sequence of compact light distributions of different topologies in a microscope, the interaction with the fluorophores, the collection of the re-emitted light by the microscope objective, 22, the detection of the fluorescent light by the improved detector assembly 65, and the analysis of the information by a suitable algorithm. In some embodiments, the improved detection assembly, 65, comprises a single detector and recovers only the overall intensity as a function of time, while in other embodiments the improved detection assembly comprises a small area of pixels and also recovers the spatial distribution of the fluorescent light. All the retrieved information consists of a plurality of images, referred to as lateral superresolution images.
(164) In a preferred embodiment, the contribution of a fluorophore in the illuminated volume positioned in a specific lateral superresolution image is proportional to a specific combination of the Stokes parameters of the incident light at the fluorophore position.
(165) The information created by compact light distributions of different topologies, the lateral superresolution images, is new and was not present in the prior art. This new information helps to refine the position of the fluorophores, to quantify the number of fluorophores present in the illuminated volume, and to differentiate multiple fluorophores present in the same volume.
(166) We refer now to
(167)
(168) Referring to
(169) Referring to
(170) We denote mainly the following light distributions: The fundamental,
Redundancy and Random Phase Variations
(171) The elementary light distributions described in
(172) This redundancy allows some averaging of the random phase errors inevitably present in many measurement processes of biological objects. This reinforces the robustness of the measurement methodology of the embodiments of the invention and its applicability.
(173) New light distributions can also be obtained as mathematical combinations of the elementary light distributions. The "pseudo-vortex" light distribution, calculated from arithmetic combinations of the four Stokes distributions, has the feature of having a strong gradient at the origin.
(174) The PSIT method was originally designed to allow lateral superresolution; however, the PSIT method can also be used to obtain the longitudinal position of a fluorophore. Indeed, some elementary light distributions are relatively insensitive, within reasonable limits, to a variation of the longitudinal position of the fluorophore, while others are rather sensitive to it. A sequence of compact light distributions, some of them independent of and some of them dependent on the longitudinal position, would reveal the longitudinal position of the fluorophores.
(175) In addition, for the light distributions which are highly dependent on the longitudinal position of the fluorophore, a series of elementary light distributions, slightly shifted longitudinally relative to one another, can be projected on the sample, yielding a set of images containing longitudinal information.
(176) In addition, some more complex elementary light distributions exist, consisting of more complex superpositions of waves with a strong longitudinal dependence, e.g. the "three-dimensional dark spot" described by Zhang ("Generation of three-dimensional dark spots with a perfect light shell with a radially polarized Laguerre-Gaussian beam," Applied Optics, 49, 6217-6223 (2010)), which creates a black spot surrounded in three dimensions by a luminous sphere. As described by Zhang, "Investigation and applications of the three dimensional (3D) dark spot surrounded by a light shell in all directions have attracted a great deal of attention. In superresolution fluorescence microscopy, a 3D dark spot is used as the erase beam." These three-dimensional dark spots consist of a superposition of Laguerre-Gauss functions, which can be achieved within a laser cavity, using a hologram or a phase plate, as suggested by Zhang, or using uniaxial or conical crystals, as suggested by the inventor.
(178) Vector Effects
(179) The theory developed so far describes the light distribution in the imaging plane of the microscope 35. The distribution of the light projected onto the sample is, according to the theory of the geometrical imaging, a reduced image of the light distribution in the image plane.
(180) However, as described extensively in the literature, for a high numerical aperture objective the geometric imaging theory is not accurate, and vector effects must be taken into account. These effects consist essentially in the presence of a longitudinally polarized component.
(181) Referring again to
(182) Alternatively, the output polarization adaptation submodule, 74, may be variable and adapted to the topology and symmetry of each of the compact light distributions.
(183) LongSRCS Optical Module Implementing the PDOS Method
(184) We now describe the optical module LongSRCS in more detail. The longitudinal superresolution system, according to an embodiment of the invention, channels the incident light intensities of a plurality of point sources located in a small illuminated volume either on separate detectors, on distinct geometric positions on the same detector, or on a combination of both, as a function of the spatial position of each point source.
(185) In simpler words, the intensity emitted by a fluorophore positioned longitudinally at point A will be physically separated from the intensity emitted by a fluorophore positioned longitudinally at point B.
(186) The optical module LongSRCS, according to an embodiment of the invention, allows the separation into volume slices, different slices of the illuminated volume being physically separated on different sets of detectors.
(187) In the preferred embodiment, which will be explained below, the optical module LongSRCS separates an illuminated volume into at least three adjacent slices, separating the middle slice from the other two slices on sets of independent improved detectors, and creating a spatial differentiation between the two remaining slices on the same set of improved detectors.
(188) We refer now to
(189) The optical module LongSRCS channels the incident light intensity of a plurality of point sources located in a small illuminated volume either on separate detectors, on distinct geometric positions on the same detector, or on a combination of both, depending on the longitudinal position of each point source.
(190) In a preferred embodiment, it operates on the fluorophores, represented by 80, 80′ or 80″, according to their longitudinal position. It comprises a first collimating lens, 81, which may consist, in some embodiments, of the microscope objective, 4.
(191) The fluorophore 80 is positioned in the focal plane, 82, of the collimating lens; the light from the fluorophore 80 emerging from the collimating lens 81 is collimated.
(192) The fluorophores 80′ and 80″ are placed before and after the focal plane, 82, of the collimating lens, at a distance of ±Δz; the light from the fluorophores 80′ or 80″ emerging from the collimating lens 81 is convergent or divergent.
(193) The LongSRCS optical module includes a polarization beam separator, shown in
(194) Two quarter-wave plates, 86 and 87, transform, for each channel, the linear polarizations into circular polarizations.
(195) A conical crystal is placed in each of the channels 88 and 89. In each channel, a conical diffraction setup, as described in the
(196) For the fluorophore 80, positioned in the focal plane, 82, of the collimating lens, the light emerging from the collimating lens 81 is, as discussed above, collimated; referring to the conical diffraction setup, the numerical aperture of the collimating lens 81 in the image space, and the normalized cone radius, are zero, so that the effect of conical diffraction on the beam from the fluorophore 80 is zero. Therefore, the conical crystal does not change the geometry of the fluorescent light emitted by the fluorophore, nor its polarization, which remains circular with the same chirality.
(197) For the fluorophores 80′ or 80″, which are not positioned in the focal plane, 82, of the collimating lens, the light diverges or converges; referring again to the conical diffraction setup described above, the numerical aperture in the image plane of the collimating lens 81, which is equivalent to the first lens of the conical diffraction setup, 31, is non-zero. For a given defocus value Δz, positive or negative, most of the light emerging from the crystal is contained in the conical vortex wave, which has the form of a vortex and has inverted chirality.
(198) The functionality of the conical diffraction setups positioned in each of the channels is to distinguish the collimated light from the converging or diverging light, by reversing the chirality of the circular polarization for converging or diverging light.
(199) Two other quarter-wave plates, 90 and 91, transform the circular polarizations emerging from each channel into linear polarizations. We refer, for each channel, to the linear polarization that would have emerged from the retardation plate, had the crystal been removed, as the polarization of collimation.
(200) The optical module LongSRCS comprises a four-port combiner/separator, shown in
(201) For each channel, it separates the two polarizations, and it merges the two polarizations of collimation into the same path, the collimation path, 93, and the light polarized orthogonally to the collimation polarization into another path, the non-collimation path, 94. The directions of the axes of the quarter-wave plates, 86, 87, 90 and 91, must be chosen appropriately. The combined beams do not interfere, because they come from an originally unpolarized beam.
(202) The light incident in the collimation path is focused onto the collimation detector, 96, using the focusing lens of the collimation path, 95, which behaves functionally as the second lens, 32, of the conical diffraction setup.
(203) In the non-collimation path, an additional lens, 97, is inserted; the additional lens 97, together with the collimating lens, 81, creates a new lens system, 98, whose focal plane, 99, is positioned at a position different from the focal plane, 82, of the collimating lens, namely at the position of the fluorophore 80′. An additional quarter-wave plate, 100, cancels the action of the quarter-wave plates, 90 or 91, returning the incoming beams of each polarization channel to the circular polarization which they had at the output of the conical crystals of channels 88 or 89.
(204) An additional conical crystal, 101, is added in the non-collimation path as a third conical diffraction setup, the auxiliary conical diffraction setup, with the lens system, 98, acting as the first lens of the conical diffraction setup, 31.
(205) The fluorophore 80′ has been positioned before the focal plane, 82, of the collimating lens, at a distance of Δz, but, relative to the lens system 98, it is positioned at the focal plane 99. The light from the fluorophore 80′ had already been converted into a vortex by one of the conical diffraction setups, consisting of the collimating lens 81 and one of the conical crystals of channels 88 or 89, depending on the polarization channel traveled by the light. The light from the fluorophore 80′ is collimated at the output of the lens system, 98, after the additional lens, 97.
(206) Referring to the new conical diffraction setup, the numerical aperture of the lens system in the image space, and the normalized conical radius, are zero for the fluorophore 80′; the effect of the conical diffraction of the auxiliary diffraction setup on the beam emerging from the fluorophore 80′ is zero. Therefore, the conical crystal does not change the geometry of the fluorescent light emitted by the fluorophore. Light incoming from the fluorophore 80′ is thus a vortex both before and after the conical crystal, 101.
(207) The fluorophore 80″ had been placed after the focal plane, 82, of the collimating lens, at a distance of Δz; relative to the lens system 98, it is placed at a distance of −2Δz from the focal plane, 99, and the light from the fluorophore 80″ is also convergent at the output of the lens system, 98, after the additional lens, 97. The light from the fluorophore 80″ had already been converted into a vortex by one of the conical diffraction setups, consisting of the collimating lens 81 and one of the conical crystals of channels 88 or 89, depending on the polarization channel followed by the light. The conical crystal 101 changes the light from the fluorophore 80″ and, for relevant parameters of the material, i.e. the size and orientation of the conical crystal, reverts it to a regular wave, slightly different from the Airy disk.
(208) The objective lens of the non-collimation path, 102, is adapted to focus on the pixelated detector assembly, 103, the plane, 104, containing the fluorophore 80″, whose light is a regular wave, and not the plane of the fluorophore 80′, whose light is singular. The incident light emerging from a fluorophore positioned in the plane 104, such as the fluorophore 80″, is perfectly focused and is positioned at the center of the pixelated detector, 103. Incident light emerging from a fluorophore positioned at the plane 99 is a vortex and therefore focuses on an outer ring with a central zero. By separately recording the intensity at the center and the intensity at the outer part of the detector, it is possible to separate, with only a slight overlap, the incident light from the planes 104 and 99. In addition, a fluorophore positioned at the plane 99, such as the fluorophore 80′, is slightly defocused, because the objective is calculated so as to focus the plane 104 on the detector. This improves the action of the optical module LongSRCS, pushing the intensity of the vortex further from the center and reducing the overlap.
(209) This simplified description of a preferred embodiment of the optical module LongSRCS, 800, allows many possible variations and adaptations through changes in the optical design known to one skilled in the art. These changes include, but are not limited to: the crystal material and orientations, the choice of polarization components, the choice of the polarization axes of the cascaded elements, the number of sensors, or reversing the roles of the fluorophores 80′ and 80″. In addition, the module is ideally suited to be constructed as a set of monolithic subassemblies, or even as a single monolithic unit.
(210) Method PDOS and Lateral Measurements
(211) The PDOS method was originally designed to allow longitudinal superresolution; however, the PDOS method can also be used for measuring the lateral position of a fluorophore. Indeed, the elementary light distributions are also sensitive to a variation of the lateral position of the fluorophore. For a plane sample, in cases where the projection of light is not possible, the PDOS method may replace the PSIT method for performing superresolution measurements.
(212) All these variants of the measurement methodology are considered part of the invention. The inventor has nevertheless chosen, in the preferred implementation, to separate the lateral measurements from the longitudinal measurements into two disjoint but complementary optical modules, to reduce the complexity of each of the add-ons.
(213) Detection Module
(214) The corollary of the potency of the measurement methodology is the requirement of a more complex detection module, able to detect and retrieve the information created. In scanning confocal microscopy, the detector consists of a single element, such as a PMT or SPAD. The acquisition time of the detector is determined by the scanning mechanism.
(215) The measurement methodology requires, in some embodiments, two detector modules instead of one: the fundamental and vortex detector modules. In addition, the measurement methodology requires, in some embodiments, for each illuminated volume, the acquisition of the optical information on a small spatial grid, typically 16*16, at a rate faster than the pixel time, due to the requirement to identify and quantify the sequential signals.
(216) An improved detection module, 65, may be implemented using small detectors with a low number of pixels. Such a module would not have been possible ten or twenty years ago, due to the lack of appropriate technologies. Today, small detectors with a small number of pixels, high speed and low noise are available based on several technologies: SPAD arrays with a small number of pixels, such as 32*32, have recently been demonstrated with acquisition rates up to 1 MHz. The improved detector module, 65, may also be implemented using CCD, EMCCD or CMOS sensors. CCD, CMOS and EMCCD sensors with a small number of pixels exist or can be specifically designed. In addition, CCD, CMOS and EMCCD sensors can be used with features such as region of interest, sub-windowing or "binning," available in many detectors.
(217) The spatio-temporal information referenced herein is the position and time of impact of each fluorescent photon. In real systems, the spatio-temporal information is corrupted by detector noise, which creates spurious photon events, and by inefficient detection, which causes photons to go undetected, thereby reducing performance. In SPAD arrays, for each photon, the pixel that detected it and the time of impact are received, i.e. the full spatio-temporal information is available. For CCD, CMOS or EMCCD sensors, the acquisition of multiple frames is necessary to approximate the spatio-temporal information.
(218) In several implementations we will refer to separate detectors; in many cases the detectors can be physically separate, consist of different areas on a single detector, or be a combination of the two previous cases.
(219) Algorithms SRCDA
(220) As stated previously, the SRCDA algorithmics can be implemented using the inverse-problem and parameter-estimation methods known to those skilled in the art.
(221) We also present an algorithm according to one embodiment, specific to the measurement methodology, based on a set of descriptors.
(222) Referring now to
(223) An algorithmic procedure, presented in the
(224) The preprocessing procedure, 111, reorganizes the spatio-temporal information, 110, into sets of superresolution images, 112. This can be done using a filter-bank procedure. The data set is then a small series of small images, typically of 16*16 pixels. The preprocessing procedure is applied to a small number, of the order of several thousand, of spatio-temporal elements; it can be performed in real time using existing hardware.
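A minimal sketch of such a preprocessing step, binning spatio-temporal photon events into one small image per projected light distribution; the event format (pixel coordinates plus distribution index) and the grid size are illustrative assumptions:

```python
# Sketch: preprocessing that reorganizes spatio-temporal photon events
# into one image per projected light distribution. Each event is an
# illustrative tuple (pixel_x, pixel_y, distribution_index).

def bin_events(events, num_distributions, size=16):
    """Accumulate photon counts into one size x size image per pattern."""
    images = [[[0] * size for _ in range(size)]
              for _ in range(num_distributions)]
    for px, py, o in events:
        images[o][py][px] += 1   # row = y, column = x
    return images

# Four photons: two under pattern 0, one under pattern 1, one under 2.
events = [(8, 8, 0), (8, 8, 0), (3, 12, 1), (8, 8, 2)]
imgs = bin_events(events, num_distributions=3)
assert imgs[0][8][8] == 2
assert imgs[1][12][3] == 1
assert sum(sum(row) for row in imgs[2]) == 1
```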
(225) The descriptor procedure, 113, the main step of the calculation, creates from each image a set of descriptors, 114, together with their statistical significance. Descriptors include, but are not limited to: the intensity of each image; the presence in the image of a light distribution and its characterization as a regular distribution or as a vortex; the center of gravity; and moments of first and higher orders.
(226) The third step is a filtering operation, 115, wherein only the descriptors that are statistically relevant are retained.
(227) The classification operation, 116, is the last step of the algorithm. The algorithm is capable of recognizing, on the basis of the set of descriptors, 114, and a knowledge base, 117, the different measurement cases, such as a single fluorophore, two fluorophores separated longitudinally or laterally, or three or more fluorophores.
(228) Note that, due to the amount of information created, numerous cases that were ambiguous in fluorescence microscopy will be clearly identified. For example, as described in more detail later, a single fluorophore must meet a long list of conditions and cannot be confused with a multi-fluorophore case. Two longitudinally separated fluorophores will create independent sets of descriptors, and two laterally separated fluorophores differ clearly, on at least one descriptor, from a single fluorophore.
(229) Compound Optical Process Implementing the Measurement Methodology
(230) The compound optical process according to at least one embodiment of the invention is the logical complement of the descriptors algorithm. Indeed, the result of the descriptors calculation procedure can lead to the conclusion that an additional image would improve the performance of the measurement. The SRCDP microscopy platform allows the acquisition of one or more additional images from a set of light distributions of the PSIT or PDOS methods.
(231) An example is explained below.
(232) Measuring the Position of a Point by the PSIT Method
(233) The PSIT method can be used as a technique for measuring the position of a fluorophore with high precision. This measurement can use the descriptors algorithm presented previously.
(234) Consider a fluorophore positioned at (x, y) in Cartesian coordinates and (ρ, θ) in polar coordinates.
(235) A sequence of illumination consisting of a fundamental wave and a couple of so-called “half-moon” distributions aligned along orthogonal axes is projected onto the fluorophore.
(236) The preprocessing procedure creates two images: a “top hat” image consisting of the sum of the three images of the sequence, and a vortex image consisting of the sum of the two half-moon images. A first descriptor, the Cartesian position, is calculated by applying the centroid algorithm to the “top hat” image.
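The recombination of the three images of the sequence into the two derived images can be sketched as follows in pure Python on lists of rows; the function name is illustrative:

```python
def combine_sequence(i_fund, i_hm1, i_hm2):
    """Combine the three PSIT sequence images (fundamental plus two
    orthogonal half-moons) into the two derived images used by the
    descriptor step: the 'top hat' image (sum of all three images) and
    the 'vortex' image (sum of the two half-moon images)."""
    top_hat = [[a + b + c for a, b, c in zip(ra, rb, rc)]
               for ra, rb, rc in zip(i_fund, i_hm1, i_hm2)]
    vortex = [[b + c for b, c in zip(rb, rc)]
              for rb, rc in zip(i_hm1, i_hm2)]
    return top_hat, vortex
```

The Cartesian descriptor is then obtained by running a standard centroid algorithm on the resulting top-hat image.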
(237) Referring to
(238) The azimuthal position can be measured by measuring the intensity ratio between the total intensity emitted by the fluorophore illuminated by the first half-moon distribution, I.sub.H, and the total intensity emitted by the fluorophore illuminated by the second half-moon distribution, I.sub.V. The ratio between these two intensities follows a geometric tangent-square law:
(239) I.sub.H/I.sub.V=tan.sup.2(θ)
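Inverting this tangent-square law gives the azimuth directly from the two measured intensities. A minimal Python sketch follows; the function name, and the convention that I_H appears in the numerator, are assumptions for illustration, and the θ/−θ degeneracy of the half-moon pair discussed later is not resolved here:

```python
import math

def azimuth_from_half_moons(i_h, i_v):
    """Recover the azimuthal angle theta (in radians, in [0, pi/2]) from
    the two half-moon intensities, by inverting I_H / I_V = tan^2(theta).

    atan2 handles the i_v == 0 case (theta = pi/2) without division.
    """
    return math.atan2(math.sqrt(i_h), math.sqrt(i_v))
```

For example, equal intensities in the two half-moon images correspond to an azimuth of 45 degrees.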
(240) The two measurements are redundant. This redundancy provides a means to qualify the observed object as a single point and to separate it from other objects potentially present in the sample.
(241) Representation in a Higher Dimensional Space: CartesianoPolar Representation
(242) This result can be generalized. We introduce here an entirely new representation of a plane, combining the Cartesian representation and the polar representation. We name this representation the CartesianoPolar representation. A point in the plane is represented by a quadruplet: x, y, ρ, θ. This representation is non-Euclidean and redundant. A similar representation of space can be defined mutatis mutandis.
(243) At first sight this representation seems unnecessary: it is a highly complex representation of a much simpler reality. It is well known that the position of a point in a plane can be represented, alternatively, either by using the Cartesian coordinates x and y, or by using the polar coordinates ρ and θ.
(244) Representation in a Higher Dimensional Space: Pythagoras Space
(245) Here only a simplified version of the CartesianoPolar representation is detailed, wherein a point is represented by the coordinates x, y and ρ. We name this space the space of Pythagoras.
(246) Define the geometric surface to be the two-dimensional surface in three-dimensional space that satisfies the constitutive geometric equation ρ.sup.2=x.sup.2+y.sup.2. Assume a measurement system that simultaneously measures x, y and ρ, such as the measurement system described in the previous paragraph together with a centroid algorithm applied to the same data. A single point will be physically positioned, in the space of Pythagoras, on this geometrical surface. Consider the case of two or more physical points: the center of gravity of the measured points lies off the geometric surface and creates a point outside this surface. This representation is a mathematical formalization and generalization of the deterministic algorithm, described previously, for separating the case of an isolated point from that of an aggregate of points.
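The test of whether a measurement lies on the Pythagoras surface reduces to a one-line check. The sketch below assumes an illustrative noise tolerance; the function name is hypothetical:

```python
def is_single_point(x, y, rho, tol=1e-6):
    """Return True if the redundant measurement (x, y, rho) lies on the
    Pythagoras surface rho^2 = x^2 + y^2, up to the tolerance `tol`.

    A single emitter satisfies the constitutive equation; for an aggregate
    of points the centroid (x, y) and the radial descriptor rho generally
    disagree, so the measured triplet leaves the surface.
    """
    return abs(rho ** 2 - (x ** 2 + y ** 2)) <= tol
```

For instance, two points sharing the radial value ρ but lying at different azimuths yield a centroid with x² + y² strictly smaller than ρ², so the triplet is flagged as an aggregate.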
(247) Recognition and Measurement of Two Points: A New Resolution Criterion
(248) Consider now two fluorophores positioned symmetrically about the center, at positions (ρ, θ) and (ρ, −θ) in polar coordinates. We will use the system described in the previous paragraphs. Three descriptors give the following results:
(249) The centroid descriptor measures the centroid of the light distribution, which will be the origin; the ρ descriptor measures the common radial value of the two fluorophores; and the θ descriptor, which in the case of half-moons contains a degeneracy between θ and −θ, measures the value θ.
(250) As mentioned above, if the value of the descriptor ρ is not zero, we know that the case under study is not a single point but two or more. In addition, the descriptors ρ and θ allow us to measure the characteristics of the two points at a much higher resolution than that defined by the Rayleigh criterion. Moreover, using a compound process it is possible to separate this case from the vast majority of cases of three or more points. An additional light distribution can be projected onto the sample, a half-moon inclined at an angle θ; the hypothesis of the presence of two points will be confirmed or refuted based on the results of this image. Indeed, the measured energy will be zero for two points, for a line, or for a series of dots aligned in the direction of the angle θ.
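The confirmation step can be sketched numerically. The quadratic intensity profile below is a crude stand-in for the dark nodal line of a half-moon distribution, not the exact optical point-spread function, and both function names are illustrative:

```python
import math

def half_moon_intensity(px, py, tilt):
    """Illustrative half-moon intensity at point (px, py): zero along the
    nodal line through the origin at angle `tilt`, growing as the squared
    distance from that line (crude model, not the real optical PSF)."""
    # Signed distance of (px, py) from the line at angle `tilt`.
    d = -px * math.sin(tilt) + py * math.cos(tilt)
    return d * d

def confirm_aligned_points(points, theta):
    """Project a half-moon tilted at angle theta onto the candidate points;
    near-zero total re-emitted energy supports the hypothesis that the
    points are aligned along the direction theta (two points, a line,
    or a series of dots), as stated in the text."""
    return sum(half_moon_intensity(px, py, theta) for px, py in points) < 1e-9
```

Points lying on the nodal line contribute no energy, while any point off that line produces a measurable signal and refutes the hypothesis.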
(251) Control Module
(252) With reference to
(253) The control module, 1100 (shown in
Alternative Implementations of the Measurement Methodology
(254) In one embodiment of the PSIT method, regular and singular waves are created by the propagation of an incident regular wave through a uniaxial crystal, replacing the conical crystal 32.
(255) In another embodiment of the PSIT method, regular and singular waves are created by positioning at the Fourier plane of an optical system a phase plate, such as a spiral phase plate, or a subwavelength grating, or by positioning a suitable holographic optical element.
(256) In another embodiment of the PSIT method, the thick point embodiment, not shown, the illumination of the sample comprises a sequence of at least two compact compound light distributions, each compact compound light distribution itself being composed of at least two simple compact light distributions projected simultaneously. Said at least two simple compact light distributions are optically coherent, partially coherent or incoherent relative to each other, are positioned at different spatial positions, and differ in at least one characteristic, such as their central lateral position, their central longitudinal position, their polarization, amplitude or phase. The ensemble of simple compact light distributions contains compact light distributions from different topological families.
(257) In another embodiment of the PSIT method, not shown, compact light distributions are created by different modes of a multimode laser, and the sequence of compact light distributions is created by successively creating modes or, alternatively, by controlling the balance of energy between the modes.
(258) In another embodiment of the PSIT method, not shown, the relationship between the regular and the singular wave is dynamically changed.
(259) In another embodiment of the PSIT method, not shown, the regular wave and the singular wave are created by a physical separation of the incident beam into at least two paths, the transformation, in one path, of the regular beam into a singular beam being realized by known means such as phase plates or spiral phase plates, holographic optical elements, subwavelength gratings, uniaxial or biaxial crystals, or a combination thereof, and the recombination of the two beams into a single beam using a beam combiner. In this embodiment, the differentiation of the compact light distributions can be performed either on the combined beam or on each beam independently, after separation and before recombination.
(260) In another embodiment of the PSIT method, dynamic following, not shown, the system comprises means, including but not limited to controllable mirrors, electro-optical or acousto-optical devices, or piezoelectric actuators, capable of moving the compact light distribution or the sequence of compact light distributions in space with high precision. In this dynamic monitoring system, the position of the compact light distribution and of the sequence is dynamically controlled so as to follow at least one specific target.
(261) In another embodiment of the PSIT method, the black fluorophore embodiment, not shown, the compact light distribution, or a mathematical combination of compact light distributions, is configured so that there is zero intensity at the center of the compact light distribution. The system comprises means adapted to move the compact light distribution through space, and these means are used to follow the fluorophore and to position the fluorophore at the center of the distribution as a function of time. When the fluorophore is positioned at the center of the compact light distribution, without movement, its position can be measured with high accuracy with almost no fluorescent light emerging from the fluorophore, thereby substantially reducing the effects of photobleaching. A movement of the fluorophore can be compensated by an appropriate movement of the position of the compact light distribution to follow the fluorophore, using only a small amount of emitted fluorescent light.
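One iteration of such a tracking loop can be sketched as a simple proportional controller that nudges the zero-intensity center toward the estimated fluorophore position; the function name and the gain value are assumptions, and a real system would derive the position error from the small residual fluorescence rather than receive it directly:

```python
def track_step(center, fluor_pos, gain=0.5):
    """One iteration of a 'black fluorophore' style tracking loop.

    Moves the zero-intensity center of the light distribution a fraction
    `gain` of the way toward the estimated fluorophore position, so that
    the fluorophore stays near the dark center and emits almost no light.
    """
    cx, cy = center
    fx, fy = fluor_pos
    return (cx + gain * (fx - cx), cy + gain * (fy - cy))
```

Iterating this step converges the center onto a stationary fluorophore while keeping the emitted (and thus photobleaching) dose low.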
(262) In another embodiment of the PSIT method, dynamic sequence choice, not shown, the system dynamically determines, on the basis of a positioning hypothesis or of a first set of measurements, the optimal sequence of compact light distributions.
(263) In another embodiment of the PSIT method, sequence choice and dynamic positioning of the compact light distribution, not shown, the system comprises means, including but not limited to controllable mirrors, electro-optic and acousto-optic devices or piezoelectric actuators, capable of moving the compact light distribution, or a combination of compact light distributions, in space with great precision. The system dynamically determines, on the basis of a positioning hypothesis or of a first set of measurements, the optimal sequence and position of the compact light distributions.
(264) In another embodiment of the PSIT method, the triangulation PSIT method, two or more measurement processes of the PSIT method, previously described, are carried out on the same sample with different projection axes. The variation in lateral position between the two measurements permits the measurement of the longitudinal position of the light nanoemitter.
(265) In another embodiment of the PSIT method, the parallel PSIT method, light is incident on a microlens array, or other optical means known to those skilled in the art allowing the realization of a set of light distributions in parallel, these light distributions being modified by an optical module so as to perform the PSIT method simultaneously on a large number of discrete points.
(266) In another embodiment of the PSIT method, the multispectral PSIT method, not shown, the sample is illuminated sequentially or simultaneously by at least two illumination sequences, each sequence projecting light onto the sample at a different wavelength.
(267) In another embodiment of the PDOS method, not shown, the channeling of the incoming light from different point sources according to their longitudinal position is realized in the focal plane. It is carried out using an element having polarization properties dependent on the lateral position. Light entering from a point disposed longitudinally relative to a determined plane will be incident on a given position and will have specific polarization properties, and the incident light from points located at different longitudinal, and lateral, positions will be incident on other positions in the focal plane, which have different polarization characteristics.
(268) As to a further discussion of the manner of usage and operation of the invention, the same should be apparent from the above description. Accordingly, no further discussion relating to the manner of usage and operation will be provided.
(269) In this respect, before explaining at least one embodiment of the invention in detail, it is understood that the invention is not limited in its application to the details of construction and arrangements of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments and can be practiced and carried out in various ways. In addition, it is understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
(270) References cited herein teach many principles that are applicable to the present invention. Therefore, the entire contents of these publications are incorporated herein by reference, as appropriate to the teachings of additional or alternative details, features and/or technical information.
(271) The embodiments of the invention described can be integrated on a fluorescence confocal microscope. The superresolution system according to embodiments of the invention constitutes a new method of measurement, in addition to or in replacement of existing methods of microscopy. However, the superresolution system according to embodiments of the invention may equally be integrated on other microscopy platforms. These microscopy platforms include, but are not limited to: wide field microscopes, bright field microscopes, dark field microscopes, polarization microscopes, phase difference microscopes, differential interference contrast microscopes, stereo microscopes, Raman microscopes, and microscopes dedicated to a specific task, such as live cell imaging, cell sorting or cell motility, or any other optical microscopy instrument as described for example in Nikon (2011).
(272) It is understood that the invention is not limited in its application to the details set forth in the description contained herein or illustrated in the drawings.
(273) The invention is capable of other embodiments and of being practiced and carried out in various ways. Those skilled in the art will readily understand that various modifications and changes can be applied to the embodiments of the invention as described above without departing from its scope as defined in and by the appended claims.