RECONSTRUCTION ALGORITHM FOR FOURIER PTYCHOGRAPHIC IMAGING
20170363853 · 2017-12-21
Assignee
Inventors
CPC classification
G06T3/4084
PHYSICS
G06T7/521
PHYSICS
G02B21/367
PHYSICS
G06T11/006
PHYSICS
H04N23/951
ELECTRICITY
International classification
G02B21/36
PHYSICS
G06T7/521
PHYSICS
Abstract
A method of generating an image of a substantially translucent specimen includes illuminating and imaging the specimen based on light filtered by an optical element. A plurality of variably-illuminated relatively low resolution intensity images of the specimen are acquired for which content of the images corresponds to partially overlapping regions in frequency space. A relatively higher resolution image of the specimen is then reconstructed by iteratively updating overlapping regions of the relatively higher resolution image in Fourier space with the plurality of variably-illuminated, relatively lower resolution intensity images. The iterative updating processes the plurality of relatively lower resolution intensity images in a first sequence which progresses from a centre region of the relatively higher resolution image in increasing spatial frequency followed by a second sequence which progresses towards the centre region in decreasing spatial frequency.
Claims
1. A method of generating an image of a substantially translucent specimen, the method comprising: (a) illuminating and imaging the specimen based on light filtered by an optical element; (b) acquiring a plurality of relatively lower resolution intensity images of the specimen for which content of the images corresponds to partially overlapping regions in frequency space; and (c) reconstructing a relatively higher resolution image of the specimen by iteratively updating overlapping regions of the relatively higher resolution image in Fourier space with the plurality of relatively lower resolution intensity images, wherein said iterative updating processes the plurality of relatively lower resolution intensity images in a first sequence which progresses from a centre region of the relatively higher resolution image in increasing spatial frequency followed by a second sequence which progresses towards the centre region in decreasing spatial frequency.
2. A method according to claim 1, comprising using a variable illuminator to control the spatial frequency associated with the relatively lower resolution intensity images according to angles of illumination between individual light sources of the variable illuminator and the specimen.
3. A method according to claim 1, comprising using a scanning aperture to control the spatial frequency associated with the intensity images.
4. A method according to claim 1, comprising using a spatial light modulator to control the spatial frequency associated with the intensity images.
5. A method according to claim 1, wherein said first sequence starts with one said lower resolution image corresponding to a spatial frequency that is at or near to zero.
6. A method according to claim 1, wherein said second sequence ends with one said lower resolution image corresponding to a spatial frequency that is at or near to zero.
7. A method according to claim 1, wherein the iterative updating concludes towards the centre region such that the second sequence is the final sequence.
8. A method according to claim 1, wherein said first sequence is selected in order of increasing maximum modulus of spatial frequency, and then in an order according to an angle of the radial spatial frequency.
9. A method according to claim 2, wherein the order according to an angle of progression is one of an increasing or decreasing angle around an optical axis in a plane of illumination.
10. A method according to claim 8, wherein said second sequence is selected in order of decreasing maximum modulus of spatial frequency, and then in an order according to an angle of the radial spatial frequency.
11. A method according to claim 8, wherein said second sequence is selected in order of decreasing radial spatial frequency, and then in order of one of increasing or decreasing angle of the radial spatial frequency.
12. A method according to claim 10, wherein the order according to the angle of progression is one of an increasing or decreasing angle of the radial spatial frequency.
13. A method according to claim 1, wherein said first sequence is selected in order of increasing radial spatial frequency, and then in order of one of increasing or decreasing angle of the radial spatial frequency.
14. A method according to claim 13, wherein said second sequence is selected in order of decreasing radial spatial frequency, and then in order of one of increasing or decreasing angle of the radial spatial frequency.
15. A method according to claim 2, wherein the variable illuminator comprises positions of illumination on a plane perpendicular to an optical axis of imaging and configured to illuminate the specimen from a plurality of angles of illumination, wherein at least one of: (a) positions of illumination on the plane map to two-dimensional (2D) spatial frequencies in a Fourier reconstruction space that are approximately evenly spaced; (b) positions of illumination on the plane map to 2D spatial frequencies in a Fourier reconstruction space such that the density is greater towards the spatial frequency corresponding to the DC term of the Fourier reconstruction; (c) positions of illumination on the plane map to 2D spatial frequencies in a Fourier reconstruction space such that the density is greater towards the spatial frequency corresponding to the DC term of the Fourier reconstruction according to a power law; (d) positions of illumination on the plane map to 2D spatial frequencies in a Fourier reconstruction space such that the density is greater towards the spatial frequency corresponding to the DC term of the Fourier reconstruction by the illumination angles being arranged with a substantially regular pattern in a polar coordinate system defined by a radial coordinate that depends on the magnitude of the angle relative to an optical axis and an angular coordinate corresponding to the orientation of the angle relative to the optical axis; (e) a density of positions of illumination drops substantially to zero outside a circular region; (f) positions of illumination on a plane perpendicular to the optical axis are spaced evenly on concentric circles such that the number of angular locations selected around each circle increases monotonically with the radius of the circle; and (g) positions of illumination are defined by one or more spiral arrangements.
16. A method according to claim 1, wherein the illuminating and imaging comprises scanning an aperture in a plane perpendicular to an optical axis of imaging, wherein at least one of: (a) positions of the scanning aperture map to 2D spatial frequencies in a Fourier reconstruction space such that the density is greater towards the spatial frequency corresponding to the DC term of the Fourier reconstruction; (b) positions of the aperture map to 2D spatial frequencies in a Fourier reconstruction space such that the density is greater towards the spatial frequency corresponding to the DC term of the Fourier reconstruction according to a power law; (c) positions of the aperture map to 2D spatial frequencies in a Fourier reconstruction space being arranged with a substantially regular pattern in a polar coordinate system defined by a radial coordinate that depends on a modulus of spatial frequency, and an angular coordinate which depends on the angle of the radial spatial frequency; (d) a density of positions of the scanning aperture drops substantially to zero outside a circular region; (e) scanning aperture positions are spaced evenly on concentric circles such that the number of angular locations selected around each circle increases monotonically with the radius of the circle; and (f) scanning aperture positions are defined by one or more spiral arrangements.
17. Apparatus for generating an image of a substantially translucent specimen, comprising: an imaging system for illuminating and imaging the specimen based on light filtered by an optical element and acquiring a plurality of relatively lower resolution intensity images of the specimen for which content of the images corresponds to partially overlapping regions in frequency space; and a processor system configured to reconstruct a relatively higher resolution image of the specimen by iteratively updating overlapping regions of the relatively higher resolution image in Fourier space with the plurality of relatively lower resolution intensity images, wherein said iterative updating processes the plurality of relatively lower resolution intensity images in a first sequence which progresses from a centre region of the relatively higher resolution image in increasing spatial frequency followed by a second sequence which progresses towards the centre region in decreasing spatial frequency.
18. Apparatus according to claim 17, comprising at least one of: (i) a variable illuminator to control the spatial frequency associated with the relatively lower resolution intensity images according to angles of illumination between individual light sources of the variable illuminator and the specimen; (ii) a scanning aperture to control the spatial frequency associated with the intensity images; and (iii) a spatial light modulator to control the spatial frequency associated with the intensity images.
19. A non-transitory computer readable storage medium having a program recorded thereon, the program being executable by a processor for generating an image of a substantially translucent specimen, the program comprising: code for illuminating and imaging the specimen based on light filtered by an optical element to acquire a plurality of relatively lower resolution intensity images of the specimen for which content of the images corresponds to partially overlapping regions in frequency space; and code for reconstructing a relatively higher resolution image of the specimen by iteratively updating overlapping regions of the relatively higher resolution image in Fourier space with the plurality of relatively lower resolution intensity images, wherein said iterative updating processes the plurality of relatively lower resolution intensity images in a first sequence which progresses from a centre region of the relatively higher resolution image in increasing spatial frequency followed by a second sequence which progresses towards the centre region in decreasing spatial frequency.
20. A non-transitory computer readable storage medium according to claim 19, wherein the code for reconstructing is executable such that at least one of: (i) said first sequence starts with one said lower resolution image corresponding to a spatial frequency that is at or near to zero; (ii) said second sequence ends with one said lower resolution image corresponding to a spatial frequency that is at or near to zero; (iii) the iterative updating concludes towards the centre region such that the second sequence is the final sequence; (iv) said first sequence is selected in order of increasing maximum modulus of spatial frequency, and then in an order according to an angle of progression from the centre region.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] At least one embodiment of the invention will now be described with reference to the following drawings, in which:
DETAILED DESCRIPTION INCLUDING BEST MODE
Context
[0049] A variable illumination system (illuminator) 108 is positioned in association with the microscope 101 so that the specimen 102 may be illuminated by coherent or partially coherent light incident at different angles. The illuminator 108 typically includes small light emitters 112 arranged at a distance from the specimen 102, the distance being large compared to the size of the emitters and also compared to the size of the specimen 102. With such an arrangement, the light emitters 112 act somewhat like point sources, and the light from the emitters 112 approximates plane waves at the specimen 102. An alternative configuration may use larger light emitters and a lens to focus the light to a plane wave. The specimen 102 is typically substantially translucent such that the illuminating light can pass through the specimen 102 and be focussed by the lens 109 of the microscope 101 for detection by the camera 103. The arrangement of the microscope 101, the lens 109 and the camera 103 represents a detector that forms an optical axis and is configured to capture or acquire images of the specimen 102 subject to the variable illumination afforded by the illuminator 108.
[0050] The microscope 101 forms an image of the specimen 102 on a sensor in the camera 103 by means of an optical system. The optical system may be based on an optical element that may include an objective lens 109 with low numerical aperture (NA), or some other arrangement. The camera 103 captures one or more images 104 corresponding to each illumination configuration. Multiple images may be captured at different stage positions and/or different colours of illumination. The arrangement provides for the imaging of the specimen 102, including the capture and provision of multiple images of the specimen 102 to the computer 105.
[0051] The captured images 104, also referred to as relatively low or lower resolution images, are intensity images that may be greyscale images or colour images, depending on the sensor and illumination. The images 104 are passed to a computer system 105 which can either start processing the images immediately or store them in temporary storage 106 for later processing. As part of the processing, the computer 105 generates a relatively high or higher resolution image 110 corresponding to one or more regions of the specimen 102. The higher resolution image may be reproduced upon a display device 107. As illustrated, the computer 105 may be configured to control operation of the individual light emitters 112 of the illuminator 108 via a control line 116. Also, the computer 105 may be configured to control movement of the stage 114, and thus the specimen 102, via a control line 118. A further control line 120 may be used by which the computer 105 may control the camera 103 for capture of the images 104.
[0052] The transverse optical resolution of the microscope may be estimated based on the optical configuration of the microscope and is related to the point spread function of the microscope. A standard approximation to this resolution in air is given by the Rayleigh criterion:
r=0.61λ/NA, (1)
where NA is the numerical aperture, and λ is the wavelength of light. A conventional slide scanner might use an air immersion objective lens with an NA of 0.7. At a wavelength of 500 nm, the estimated resolution is 0.4 μm. A typical FPM system would use a lower NA of the order of 0.08 for which the estimated resolution drops to 4 μm.
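As a sanity check, the figures quoted above (0.4 μm at NA 0.7 and 4 μm at NA 0.08, both at λ = 500 nm) are consistent with the Rayleigh criterion r = 0.61λ/NA; a minimal sketch (the function name is illustrative):

```python
import math

def rayleigh_resolution(wavelength_m: float, na: float) -> float:
    """Estimated transverse resolution in air: r = 0.61 * lambda / NA."""
    return 0.61 * wavelength_m / na

# Values from the text: lambda = 500 nm
r_slide_scanner = rayleigh_resolution(500e-9, 0.7)   # ~0.4 um
r_fpm_objective = rayleigh_resolution(500e-9, 0.08)  # ~4 um
print(f"{r_slide_scanner * 1e6:.2f} um, {r_fpm_objective * 1e6:.2f} um")
```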
[0053] The numerical aperture of a lens defines a half-angle, θ.sub.H, of the maximum cone of light that can enter or exit the lens. In air, this is defined by:
θ.sub.H=arcsin(NA), (2)
in terms of which the full acceptance angle of the lens can be expressed as θ.sub.F=2θ.sub.H.
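Equation (2) and the full acceptance angle can be evaluated directly; for the NA of 0.08 used by a typical FPM system this gives a half-angle of about 4.6° (the function name is an assumption):

```python
import math

def acceptance_angles(na: float):
    """Half-angle theta_H = arcsin(NA) and full acceptance angle
    theta_F = 2 * theta_H of a lens in air (equation 2)."""
    theta_h = math.asin(na)
    return theta_h, 2.0 * theta_h

theta_h, theta_f = acceptance_angles(0.08)  # NA of a typical FPM objective
print(f"theta_H = {math.degrees(theta_h):.2f} deg, theta_F = {math.degrees(theta_f):.2f} deg")
```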
[0054] The specimen 102 being observed may be a biological specimen such as a histology slide consisting of a tissue fixed in a substrate and stained to highlight specific features. Such specimens are substantially translucent. Such a slide may include a variety of biological features on a wide range of scales. The features in a given slide depend on the specific tissue sample and stain used to create the histology slide. The dimensions of the specimen on the slide may be of the order of 10 mm×10 mm or larger. If the transverse resolution of a virtual slide was selected as 0.4 μm, each layer would consist of at least 25,000 by 25,000 pixels.
Computer Implementation
[0056] As seen in
[0057] The computer module 1801 typically includes at least one processor unit 1805, and a memory unit 1806. For example, the memory unit 1806 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1801 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1807 that couples to the video display 1814, loudspeakers 1817 and microphone 1880; an I/O interface 1813 that couples to the keyboard 1802, mouse 1803, scanner 1826, camera 103, the illuminator 108, the stage 114, and optionally a joystick or other human interface device (not illustrated); and an interface 1808 for the external modem 1816 and printer 1815. In some implementations, the modem 1816 may be incorporated within the computer module 1801, for example within the interface 1808. The computer module 1801 also has a local network interface 1811, which permits coupling of the computer system 1800 via a connection 1823 to a local-area communications network 1822, known as a Local Area Network (LAN). As illustrated in
[0058] The I/O interfaces 1808 and 1813 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1809 are provided and typically include a hard disk drive (HDD) 1810. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1812 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks 1825 (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1800. In the arrangement illustrated, the data storage 106 of
[0059] The components 1805 to 1813 of the computer module 1801 typically communicate via an interconnected bus 1804 and in a manner that results in a conventional mode of operation of the computer system 1800 known to those in the relevant art. For example, the processor 1805 is coupled to the system bus 1804 using a connection 1818. Likewise, the memory 1806 and optical disk drive 1812 are coupled to the system bus 1804 by connections 1819. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
[0060] The methods of image acquisition to be described may be implemented using the computer system 1800 wherein the processes of
[0061] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 1800 from the computer readable medium, and then executed by the computer system 1800. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 1800 preferably effects an advantageous apparatus for ptychographic imaging.
[0062] The software 1833 is typically stored in the HDD 1810 or the memory 1806. The software is loaded into the computer system 1800 from a computer readable medium, and executed by the computer system 1800. Thus, for example, the software 1833 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1825 that is read by the optical disk drive 1812. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 1800 preferably effects an apparatus for ptychographic imaging.
[0063] In some instances, the application programs 1833 may be supplied to the user encoded on one or more CD-ROMs 1825 and read via the corresponding drive 1812, or alternatively may be read by the user from the networks 1820 or 1822. Still further, the software can also be loaded into the computer system 1800 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1800 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray Disc™, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1801. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1801 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
[0064] The second part of the application programs 1833 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1814. Through manipulation of typically the keyboard 1802 and the mouse 1803, a user of the computer system 1800 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1817 and user voice commands input via the microphone 1880.
[0066] When the computer module 1801 is initially powered up, a power-on self-test (POST) program 1850 executes. The POST program 1850 is typically stored in a ROM 1849 of the semiconductor memory 1806 of
[0067] The operating system 1853 manages the memory 1834 (1809, 1806) to ensure that each process or application running on the computer module 1801 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1800 of
[0068] As shown in
[0069] The application program 1833 includes a sequence of instructions 1831 that may include conditional branch and loop instructions. The program 1833 may also include data 1832 which is used in execution of the program 1833. The instructions 1831 and the data 1832 are stored in memory locations 1828, 1829, 1830 and 1835, 1836, 1837, respectively. Depending upon the relative size of the instructions 1831 and the memory locations 1828-1830, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 1830. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1828 and 1829.
[0070] In general, the processor 1805 is given a set of instructions which are executed therein. The processor 1805 waits for a subsequent input, to which the processor 1805 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1802, 1803, data received from an external source across one of the networks 1820, 1822, data retrieved from one of the storage devices 1806, 1809 or data retrieved from a storage medium 1825 inserted into the corresponding reader 1812, all depicted in
[0071] The disclosed ptychographic imaging arrangements use input variables 1854, which are stored in the memory 1834 in corresponding memory locations 1855, 1856, 1857. The arrangements produce output variables 1861, which are stored in the memory 1834 in corresponding memory locations 1862, 1863, 1864. Intermediate variables 1858 may be stored in memory locations 1859, 1860, 1866 and 1867.
[0072] Referring to the processor 1805 of
[0076] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1839 stores or writes a value to a memory location 1832.
[0077] Each step or sub-process in the processes of
Overview
[0078] The variable illumination system 108 may be formed using a set of LEDs arranged on a flat substrate, referred to as an LED matrix. The LEDs may be monochromatic or multi-wavelength, for example they may illuminate at 3 separate wavelengths corresponding to red, green and blue light, or they may illuminate at an alternative set of wavelengths appropriate to viewing specific features of the specimen. The appropriate spacing of the LEDs on the substrate depends on the microscope optics and the distance from the specimen 102 to the illumination plane, being that plane defined by the flat substrate supporting the emitters 112. Each emitter 112, operating as a point light source, establishes a corresponding angle of illumination 495 to the specimen 102. Where the distance between the light source 112 and the specimen 102 is sufficiently large, the light emitted from the light source 112 approximates a plane wave. In general, the spacing of the LEDs on the substrate should be chosen so that the difference in angle of illumination arriving from a pair of neighbouring LEDs is less than the acceptance angle θ.sub.F defined by the numerical aperture of the lens 109 according to Equation 2 above.
[0079] An exemplary illuminator 108 is formed of a set of LEDs forming a matrix capable of illumination at 632 nm, 532 nm and 472 nm with a spacing of approximately 4 mm. The LED matrix is placed 8 cm below the sample stage 114, and cooperates with an optical system with NA of 0.08 and magnification of 2×, and a sensor pixel size of 5.5 μm.
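A rough check, under the small-angle reasoning of the preceding paragraph, that the exemplary 4 mm pitch at 8 cm satisfies the neighbouring-LED condition (the helper name and the tangent-based bound are assumptions):

```python
import math

def max_led_pitch(distance_m: float, na: float) -> float:
    """Largest LED spacing for which neighbouring LEDs differ in
    illumination angle by less than the acceptance angle
    theta_F = 2 * arcsin(NA); a tangent-based bound on the pitch."""
    theta_f = 2.0 * math.asin(na)
    return distance_m * math.tan(theta_f)

limit = max_led_pitch(0.08, 0.08)  # LED matrix 8 cm below the stage, NA 0.08
print(f"max pitch ~ {limit * 1e3:.1f} mm")
# the exemplary 4 mm pitch is well inside this limit
```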
[0080] Alternative variable illumination systems to the LED matrix may be used. For example, various display technologies capable of emitting light from particular locations (pixels) could be used, such as LCD, plasma, OLED, SED, CRT or other display technology. Also, the variable illumination may be achieved by mechanically moving a light source such as an LED to a variety of locations, or even by a combination of mechanical motion, multiple sources, and display technology.
[0082] The variable illumination system 108 is not constrained to be flat. The illumination system 108 may take some non-flat geometry, such as the hemisphere 410 illustrated in
[0083] A normalised offset vector (n.sub.x.sup.i, n.sub.y.sup.i, n.sub.z.sup.i) may be formed for the offset vector of the i.sup.th angled illumination (dx.sub.i, dy.sub.i, dz.sub.i) by dividing by the distance from the specimen point to the point on the plane corresponding to the illumination (i.e. from 435 to 420, or from 335 to 330):
(n.sub.x.sup.i,n.sub.y.sup.i,n.sub.z.sup.i)=(dx.sub.i,dy.sub.i,dz.sub.i)/√(dx.sub.i.sup.2+dy.sub.i.sup.2+dz.sub.i.sup.2) (3)
[0084] Using this approach, it is thereby possible to define the wavevector of the i.sup.th angled illumination as the product of the normalised offset vector for this illumination and the wavenumber of illumination in vacuum, k.sub.0=2π/λ:
(k.sub.x.sup.i,k.sub.y.sup.i,k.sub.z.sup.i)=k.sub.0(n.sub.x.sup.i,n.sub.y.sup.i,n.sub.z.sup.i) (4)
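The construction in paragraphs [0083]-[0084] — normalise the source offset vector, then scale by the vacuum wavenumber k.sub.0=2π/λ — can be sketched as follows (the function name and the example LED position are illustrative):

```python
import math

def illumination_wavevector(dx: float, dy: float, dz: float, wavelength_m: float):
    """Wavevector of an angled illumination: the offset vector from the
    specimen point to the light source, normalised by its length and
    scaled by the vacuum wavenumber k0 = 2 * pi / lambda."""
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    k0 = 2.0 * math.pi / wavelength_m
    return (k0 * dx / r, k0 * dy / r, k0 * dz / r)

# Hypothetical LED 4 mm off-axis, 80 mm from the specimen, 532 nm light
kx, ky, kz = illumination_wavevector(0.004, 0.0, 0.08, 532e-9)
```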
[0085] The projected positions (460 of
[0086] It is helpful to consider aspects of the optical system in Fourier space. Two-dimensional (2D) Fourier space is a space defined by a 2D Fourier transform of the 2D real space in which the captured images are formed, or the transverse spatial properties of the specimen may be defined. The coordinates in this Fourier space are the transverse wavevectors (k.sub.x, k.sub.y). The transverse wavevectors represent the spatial frequency of the image, with low frequencies (at or near zero) being toward the centre of the coordinate representation (e.g.
[0087] Each lower resolution capture image is associated with a region in Fourier space defined by the optical transfer function of the optical element and also by the angle of illumination set by the variable illuminator. For the case where the optical element is an objective lens, the region in Fourier space can be approximated as a circle of radius r.sub.k defined by the product of the wavenumber of illumination in vacuum, k.sub.0=2π/λ, and the numerical aperture:
r.sub.k=k.sub.0NA. (5)
[0088] The position of the circular region is offset according to the angle of illumination. For the i.sup.th illumination angle, the offset is defined by the transverse components of the wavevector (k.sub.x.sup.i, k.sub.y.sup.i). This is illustrated in
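Equation (5) together with the illumination-dependent offset gives, for each captured image, a circular region of Fourier space; a minimal sketch (the function name is an assumption):

```python
import math

def fourier_region(na: float, wavelength_m: float, kx_i: float, ky_i: float):
    """Circular region of Fourier space sampled by one captured image:
    a circle of radius r_k = k0 * NA (equation 5), centred at the
    transverse components (kx_i, ky_i) of the illumination wavevector."""
    k0 = 2.0 * math.pi / wavelength_m
    return (kx_i, ky_i), k0 * na

centre, radius = fourier_region(0.08, 500e-9, 0.0, 0.0)  # on-axis illumination
```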
[0089] In an alternative mode of Fourier Ptychographic imaging, lower resolution capture images may be obtained using a shifted or scanning aperture (also referred to as aperture-scanning) rather than angled illumination. In this arrangement, the sample is illuminated using a single plane wave incident approximately along the optical axis. The aperture is set in the Fourier plane of the imaging system and the aperture moves within this plane, perpendicular to the optical axis. This kind of scanning aperture may be achieved using a high NA lens with an additional small scanning aperture that restricts the light passing through the optical system. The aperture in such a scanning aperture system may be considered as selecting a region in Fourier space represented by the dashed circle in
[0090] A general overview of a process 500 that can be used to generate a higher resolution image of a specimen by Fourier Ptychographic imaging is shown in
[0091] In the process 500, at step 510, a specimen may optionally be loaded onto the microscope stage 114. Such loading may be automated. In any event, a specimen 102 is required to be positioned for imaging. Next, at step 520, the specimen may be moved to be positioned such that it is within the field of view of the microscope 101 around its focal plane. Such movement is optional and where implemented may be manual, or automated with the stage under control of the computer 1801. Next, with a specimen appropriately positioned, steps 540 to 560 define a loop structure for capturing and storing a set of images of the specimen for a predefined set of illumination configurations. In general this will be achieved by illuminating the specimen from a specific position or at a specific angle. In the case that the variable illuminator 108 is formed of a set of LEDs such as an LED matrix, this may be achieved by switching on each individual LED in turn. The order of illumination may be arbitrary, although it is preferable to capture images in the order in which they will be processed (which may be in order of increasing angle of illumination). This minimises the delay before processing of the captured images can begin if the processing is to be started prior to the completion of the image capture. The predetermined set of illumination configurations that may be used will be discussed further with reference to
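The capture and processing order favoured here — outwards from the centre in increasing radial spatial frequency, then back towards the centre in decreasing frequency, as recited in the claims — can be sketched as follows (the function name, the (dx, dy) offset representation, and tie-breaking by polar angle are assumptions):

```python
import math

def out_and_back_order(offsets):
    """Order LED offsets (dx, dy) for processing: a first sequence from the
    centre outwards in increasing radial spatial frequency (ties broken by
    polar angle), then a second sequence back towards the centre in
    decreasing frequency, ending at the zero-frequency (DC) image."""
    def key(p):
        dx, dy = p
        return (math.hypot(dx, dy), math.atan2(dy, dx))
    outward = sorted(offsets, key=key)
    inward = list(reversed(outward))[1:]  # skip the turn-around duplicate
    return outward + inward

order = out_and_back_order([(0, 0), (1, 0), (0, 1), (2, 0)])
# first and last entries are the central (DC) image; (2, 0) is the turn-around
```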
[0092] Step 550 sets the next appropriate illumination configuration, then at step 560 a lower resolution image 104 is captured on the camera 103 and stored on data storage 106 (1810). The image 104 may be a high dynamic range image, for example a high dynamic range image formed from one or more images captured over different exposures times. Appropriate exposure times can be selected based on the properties of the illumination configuration. For example, if the variable illuminator is an LED matrix, these properties may include the illumination strength of the LED switched on in the current configuration.
[0093] Step 570 checks if all the illumination configurations have been selected, and if not processing returns to step 540 for capture at the next configuration. Otherwise, when all desired configurations have been captured, the method 500 continues to step 580. At step 580 the processor 1805 operates to generate a higher resolution image from the set of lower resolution captured images 104. This step will be described in further detail with respect to
[0094] A method 600, used at step 580 to generate a higher resolution image 110 from the set of lower resolution captured images 104 will now be described in further detail below with reference to
[0095] Method 600 starts at step 610 where the processor 1805 retrieves a set of captured images 104 of the specimen 102 and partitions each of the captured images 104.
[0096] The overlapping regions may take different sizes over the capture images 104 in order for the partitioning to cover the field of view exactly. Alternatively, the overlapping regions may be fixed in which case the partitioning may omit a small region around the boundary of the capture images 710. The size of each partition and the total number of partitions may be varied to optimise the overall performance of the system in terms of memory use and processing time. A set of partition images is formed corresponding to the geometry of a partition region applied to each of the set of lower resolution capture images. For example, the partition 750 may be selected from each capture image to form one such set of partitions.
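The partitioning of the capture images into overlapping regions described above may be sketched as follows. This is an illustrative sketch only: the tile size, the overlap width, and the clamping of boundary tiles so that the field of view is covered exactly are assumptions, and the function name is not taken from the disclosure.

```python
import numpy as np

def partition_overlapping(image, tile, overlap):
    """Split a 2D image into tiles of size `tile` x `tile` whose edges
    overlap by `overlap` pixels, covering the full field of view."""
    step = tile - overlap
    h, w = image.shape
    tiles = []
    for y in range(0, h - overlap, step):
        for x in range(0, w - overlap, step):
            # Clamp the last tile against the image boundary so the
            # partitioning covers the field of view exactly (the
            # boundary tiles then overlap their neighbours by more).
            y0 = min(y, h - tile)
            x0 = min(x, w - tile)
            tiles.append(((y0, x0), image[y0:y0 + tile, x0:x0 + tile]))
    return tiles
```

Applying the same tile geometry to every image of the capture set then yields the sets of partition images processed in steps 620 to 640.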
[0097] Steps 620 to 640 define a loop structure that processes the sets of partitions of the lower resolution images in turn. The sets of partitions may be processed in parallel for faster throughput. Step 620 selects the next set of lower resolution partitions of the capture images. Step 630 then generates a higher resolution partition image from the set of partition images. Each higher resolution partition image may be temporarily stored in memory 1806 or 1810. This step will be described in further detail with respect to
[0098] At step 650, the set of higher resolution partition images are combined to form a single higher resolution image 110. A suitable method of combining the images may be understood with reference to
[0099] Method 800, used at step 630 to generate a higher resolution partition image from set of lower resolution partition images, will now be described in further detail below with reference to
[0100] First at step 810, a higher resolution partition image is initialised by the processor 1805. The image is defined in Fourier space, with a pixel size that is preferably the same as that of the lower resolution capture images transformed to Fourier space by a 2D Fourier transform. It is noted that each pixel of the image stores a complex value with a real and imaginary component. The initialised image should be large enough to contain all of the Fourier space regions corresponding to the variably illuminated lower resolution capture images, such as the region illustrated by the dashed circle in
[0101] It is noted that in alternative implementations, the higher resolution partition image may be generated with a size that can dynamically grow to include each successive Fourier space region as the corresponding lower resolution capture image is processed.
[0102] Once the higher resolution partition image has been initialised in step 810, steps 820 to 870 loop over a number of iterations. The iterative updating is used to resolve the underlying phase of the image data to reduce errors in the reconstructed high-resolution images. The number of iterations may be fixed, preferably somewhere between 4 and 15, or it may be set dynamically by checking a convergence criterion for the reconstruction algorithm.
[0103] Each iteration starts at step 820, then step 830 determines an appropriate order for processing the set of partition images of the lower resolution capture images for the current iteration. The order may be defined by indexing each lower resolution capture image according to the order of capture. For a total of N capture images, the indices take the range i=1, . . . N.
[0104] A number of suitable orderings may be defined based on the set of transverse wavevectors (k.sub.x.sup.i, k.sub.y.sup.i) corresponding to the image captures. The transverse wavevectors may correspond to the angle of illumination, or the position of a scanning, or otherwise modifiable, aperture, such as a spatial light modulator (LCD mask). Transverse wavevectors corresponding to a number of different configurations are illustrated in
[0105] An ascending-square order is defined based on concentric squares around the DC point (k.sub.x=k.sub.y=0). Capture images corresponding to transverse wavevectors on smaller squares are processed prior to those on larger squares. In terms of the transverse wavevectors this corresponds to processing images in order of increasing value of the maximum of the modulus of the transverse wavevector components, which may be expressed as k.sub.sq=max(|k.sub.x|, |k.sub.y|). If more than one wavevector is on the same square (i.e. has the same value of k.sub.sq) then those wavevectors are ordered according to the angle of the transverse wavevector relative to a line from the origin, such as the x- or y-axis. For example, capture images on the same concentric square may be ordered according to increasing or decreasing angle around the z-axis relative to the x-axis, as seen in
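The ascending-square rule just described may be sketched as a sort over the capture indices. The use of numpy's `lexsort` and the tie-break angle measured anticlockwise from the x-axis follow the description above; the function name itself is illustrative.

```python
import numpy as np

def ascending_square_order(kx, ky):
    """Indices of the capture images sorted by the ascending-square rule:
    increasing k_sq = max(|kx|, |ky|), with ties on the same concentric
    square broken by the angle about the origin from the x-axis."""
    kx, ky = np.asarray(kx), np.asarray(ky)
    k_sq = np.maximum(np.abs(kx), np.abs(ky))
    angle = np.mod(np.arctan2(ky, kx), 2 * np.pi)  # angle in [0, 2*pi)
    # lexsort sorts by the LAST key first, so k_sq is the primary key.
    return np.lexsort((angle, k_sq))
```

The corresponding descending-square order is simply this sequence reversed.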
[0106] A preferred implementation makes use of processing in both ascending and descending directions.
[0107] For a square lattice arrangement of transverse wavevectors, the ascending-square sort order is illustrated in
[0108] An ascending-radial processing order may be defined in a similar fashion to the ascending-square processing order but based on concentric circles around the DC point rather than concentric squares. In terms of the transverse wavevectors this corresponds to processing images in order of increasing transverse radial wavevector, which may be expressed as k.sub.rad=√(k.sub.x.sup.2+k.sub.y.sup.2). As for the ascending-square order, if more than one wavevector is on the same circle (i.e. has the same value of k.sub.rad) then those wavevectors may be ordered according to the angle of the transverse wavevector around the z-axis relative to a line from the origin, such as the x-axis.
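The ascending-radial order differs from the ascending-square order only in its primary sort key, the radial wavevector k.sub.rad; a sketch under the same illustrative conventions:

```python
import numpy as np

def ascending_radial_order(kx, ky):
    """Indices of the capture images sorted by increasing radial
    wavevector k_rad = sqrt(kx**2 + ky**2), with ties on the same
    circle broken by the angle about the origin from the x-axis."""
    kx, ky = np.asarray(kx), np.asarray(ky)
    k_rad = np.hypot(kx, ky)
    angle = np.mod(np.arctan2(ky, kx), 2 * np.pi)
    return np.lexsort((angle, k_rad))  # primary key (k_rad) given last
```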
[0109] For a concentric radial lattice arrangement of transverse wavevectors, the ascending-radial processing order is illustrated in
[0110] For a spiral lattice arrangement of transverse wavevectors, the ascending-radial processing order is illustrated in
[0111] It is noted that in the illustrations, the ascending-square and descending-square order is shown for a square lattice of transverse wavevectors, and the ascending-radial and descending-radial orders are shown for a concentric lattice and spiral arrangement. The square and radial orders are easier to visualise when the underlying lattice and processing order selection are based on similar geometry. However either processing order may be used for any lattice.
[0112] The above describes two types of processing order: ascending and descending. An ascending processing order typically starts near the centre of the lattice, or equivalently at a small transverse wavevector, and proceeds outwards, while a descending processing order typically starts near the outside of the lattice, or equivalently at a large transverse wavevector, and proceeds inwards. Variants of the ascending-square and ascending-radial processing orders may be defined that follow the basic pattern of an ascending order through most of the sequence. Similarly, variants of the descending-square and descending-radial orderings may be defined that follow the basic pattern of a descending processing order through most of the sequence. These variants may be defined based on a rule expressed in terms of the positions of LEDs rather than transverse wavevectors. The selected processing order may be defined differently for different partitions of the reconstruction image.
[0113] As described above, the processing order may be selected based on the iteration. For example, the first iteration might use an ascending processing order, and the final iteration might use a descending processing order. In between the first and last order it may be advantageous to use ascending then descending on subsequent iterations. For example, an even number of iterations may be used, with the first and subsequent odd iterations using an ascending processing order, and the second and all other even iterations using a descending processing order.
[0114] A typical sequence based on the ascending-square and descending-square processing orders might be a total of 10 iterations for which the 1st, 3rd, 5th, 7th and 9th iterations use an ascending-square order and the 2nd, 4th, 6th, 8th and 10th iterations use a descending-square order. A typical sequence based on the ascending-radial and descending-radial processing orders might be a total of 10 iterations for which the 1st, 3rd, 5th, 7th and 9th iterations use an ascending-radial order and the 2nd, 4th, 6th, 8th and 10th iterations use a descending-radial order. Alternative sequences may combine different processing orders for different iterations and/or different partitions.
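The alternating schedule described above may be sketched as follows, under the simplifying assumption that a descending order is the exact reverse of the corresponding ascending order (the disclosure also permits variants that deviate from this).

```python
def order_for_iteration(it, ascending):
    """Return the processing order for iteration `it` (1-based):
    odd iterations ascend from the DC region outwards, even iterations
    descend back towards it, as in the 10-iteration schedules above."""
    return list(ascending) if it % 2 == 1 else list(ascending)[::-1]
```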
[0115] The order for the first iteration may match the illumination configuration order selected at step 540 so that the reconstruction algorithm performed at step 580 may start as soon as the first image is captured, and before all of the lower resolution images are captured at step 560.
[0116] Next, steps 840 to 860 step through the images of the ordered set of partition images of the lower resolution capture images from step 830. Step 840 selects the next image from the set, then step 850 updates the higher resolution partition image based on the currently selected lower resolution partition image of the set. This step will be described in further detail with respect to
[0117] The final step 880 of method 800 is to perform an inverse 2D Fourier transform on the higher resolution partition image to transform it back to real space.
[0118] Method 900, used at step 850 to update the higher resolution partition image based on a single lower resolution partition image will now be described in further detail below with reference to
[0119] In step 910, the processor 1805 selects a spectral region in the higher resolution partition image corresponding to the currently selected partition image from a lower resolution capture. This is achieved as illustrated in
[0120] It is noted that if the variable illuminator 108 does not illuminate with plane waves at the specimen 102, then the angle of incidence for a given illumination configuration may vary across the specimen, and therefore between different partitions. This means that the set of spectral regions corresponding to a single illumination configuration may be different for different partitions.
[0121] Optionally, the signal in the spectral region may be modified in order to handle aberrations in the optics. For example, the spectral signal may be multiplied by a phase function to handle certain pupil aberrations. The phase function may be determined through a calibration method, for example by optimising a convergence metric (formed when performing the generation of a higher resolution image for a test specimen) with respect to some parameters of the pupil aberration function. The pupil function may vary over different partitions as a result of slight differences in the local angle of incident illumination over the field of view.
[0122] Next, at step 920, the image data from the spectral region is transformed by the processor 1805 to a real space image at equivalent resolution to the lower resolution capture image partition. The spectral region may be zero-padded prior to transforming with the inverse 2D Fourier transform. The amplitude of the real space image is then set to match the amplitude of the equivalent (current) lower resolution partition image at step 930. The complex phase of the real space image is not altered at this step. The real space image is then Fourier transformed at step 940 to give a spectral image. Finally, at step 950, the signal in the spectral region of the higher resolution partition image selected at step 910 is replaced with the corresponding signal from the spectral region in the spectral image formed at step 940. It is noted that in order to handle boundary related artefacts, it may be preferable to replace a subset of the spectral region that does not include any boundary pixels. If the signal in the spectral region was modified to handle aberrations at step 910, then a reverse modification should be performed as part of step 950 prior to replacing the region of the higher resolution partition image at this stage.
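Steps 910 to 950 of method 900 may be sketched as below. This is a minimal sketch of the spectral-replacement update: it assumes the lower resolution partition image stores measured intensity (so the imposed amplitude is its square root), uses centred FFT conventions, and replaces the full rectangular spectral region; the aberration handling and boundary-pixel refinements described above are omitted.

```python
import numpy as np

def update_spectrum(hi_spec, low_img, cy, cx):
    """One update of the higher resolution partition spectrum `hi_spec`
    (complex, centred Fourier space) from a lower resolution partition
    image `low_img` whose spectral region is centred at (cy, cx)."""
    h, w = low_img.shape
    y0, x0 = cy - h // 2, cx - w // 2
    # Step 910: select the spectral region for this illumination.
    region = hi_spec[y0:y0 + h, x0:x0 + w]
    # Step 920: transform the region to a real space image.
    real = np.fft.ifft2(np.fft.ifftshift(region))
    # Step 930: impose the measured amplitude, keeping the phase.
    real = np.sqrt(low_img) * np.exp(1j * np.angle(real))
    # Steps 940-950: forward transform and replace the spectral region.
    hi_spec[y0:y0 + h, x0:x0 + w] = np.fft.fftshift(np.fft.fft2(real))
    return hi_spec
```

Looping this update over the ordered set of partition images, for the scheduled number of iterations, constitutes the inner loop of method 800.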
First Exemplary Implementation
[0123]
[0124]
[0125] A further modification may be made by applying a transform to the desired set of transverse wavevectors.
[0126] A variety of suitable transforms exist, some examples being defined in terms of the radial coordinates, (k.sub.r, k.sub.θ), of the transverse wavevector, which are defined such that k.sub.x+jk.sub.y=k.sub.r e.sup.jk.sup.θ:
k.sub.r=√(k.sub.x.sup.2+k.sub.y.sup.2),
k.sub.θ=arctan 2(k.sub.y,k.sub.x), (6)
[0127] A suitable transform is to scale the radial component of the transverse wavevector according to a power law, for example:
where a suitable value for the parameter γ is 1.15 if the spacing of the light sources corresponds to a fraction of 0.55 of the acceptance angle θ.sub.F. The Cartesian transverse wavevectors are then simply given by k.sub.x=k.sub.r cos k.sub.θ and k.sub.y=k.sub.r sin k.sub.θ. Other suitable transforms may be defined in terms of simple nonlinear functional forms such as polynomial, rational, trigonometric, logarithmic, or combinations of these. According to Equations (6) and (7), positions of illumination on the plane (e.g. 11E, 12E, 14E, 15E, 16E) map to 2D wavevectors in a Fourier reconstruction space such that the density is greater towards the wavevector corresponding to the DC term of the Fourier reconstruction (e.g. respectively 11F, 12F, 14F, 15F, 16F). That is, the density of light sources is increased at lower radial wavevectors in the central region of Fourier space. This is seen for example in
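Since Equation (7) is not reproduced in this text, the sketch below assumes one plausible normalised power-law form, k.sub.r′ = k.sub.F(k.sub.r/k.sub.F).sup.γ, which leaves points on the acceptance-angle radius k.sub.F fixed while pulling interior points towards the DC term for γ > 1, consistent with the density behaviour described above.

```python
import numpy as np

def power_law_transform(kx, ky, k_f, gamma=1.15):
    """Rescale the radial component of the transverse wavevectors by an
    assumed power law k_r' = k_f * (k_r / k_f)**gamma, preserving the
    angular component; gamma > 1 concentrates points towards DC."""
    kx, ky = np.asarray(kx, dtype=float), np.asarray(ky, dtype=float)
    k_r = np.hypot(kx, ky)
    k_theta = np.arctan2(ky, kx)
    k_r2 = k_f * (k_r / k_f) ** gamma
    return k_r2 * np.cos(k_theta), k_r2 * np.sin(k_theta)
```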
[0128] In general, a set of illumination configurations corresponding to
Second Exemplary Implementation
[0129]
[0130] The configuration illustrated in
Third Exemplary Implementation
[0131]
[0132]
where the indices take the ranges i=0, . . . , N.sub.r and j=0, . . . , max(0, iN.sub.θ−1), and θ.sub.0,0 takes the value zero. The number of rings is defined by N.sub.r and the number of additional light sources per concentric ring is given by N.sub.θ. For the example in
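Equation (8) itself is not reproduced in this text; the sketch below assumes ring i sits at radius i·S.sub.r and carries iN.sub.θ equally spaced sources (ring 0 being the single central source), which matches the stated index ranges i=0, . . . , N.sub.r and j=0, . . . , max(0, iN.sub.θ−1) with θ.sub.0,0=0. The radial scale S.sub.r is an assumed parameter.

```python
import numpy as np

def concentric_positions(n_r, n_theta, s_r=1.0):
    """Light-source positions (x, y) for an assumed concentric-ring
    arrangement: ring i at radius i*s_r with i*n_theta equally spaced
    sources, plus the single central source at ring 0."""
    pts = [(0.0, 0.0)]  # ring 0: central source, theta_{0,0} = 0
    for i in range(1, n_r + 1):
        for j in range(i * n_theta):
            theta = 2 * np.pi * j / (i * n_theta)
            pts.append((i * s_r * np.cos(theta), i * s_r * np.sin(theta)))
    return np.array(pts)
```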
[0133]
r.sub.i=S.sub.r√i,
θ.sub.i=S.sub.θ√i, (9)
for i=0, . . . , (N−1), where N is the total number of light sources. Suitable parameters for the design are given by S.sub.r corresponding to a fraction of 0.325 of the acceptance angle θ.sub.F and S.sub.θ=0.3.
[0134] As mentioned above, the concentric and spiral arrangements form substantially regular patterns when defined in polar coordinates. In the concentric arrangement, the light sources are equally spaced in angle on each concentric ring. In the spiral arrangement, the angle is proportional to the square root of the index of the light source.
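The spiral arrangement of Equation (9) may be sketched directly; angles are assumed to be in radians, and the parameter values are illustrative rather than prescriptive.

```python
import numpy as np

def spiral_positions(n, s_r=1.0, s_theta=0.3):
    """Cartesian (x, y) positions of the spiral arrangement of
    Equation (9): r_i = S_r*sqrt(i), theta_i = S_theta*sqrt(i),
    for i = 0, ..., N-1."""
    i = np.arange(n)
    r = s_r * np.sqrt(i)
    theta = s_theta * np.sqrt(i)  # radians assumed
    return r * np.cos(theta), r * np.sin(theta)
```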
[0135] Other arrangements are possible based on these models. For example, the concentric arrangement may be modified such that the number of light sources on each concentric ring in the concentric arrangement varies in a nonlinear manner, or in irregular steps, while maintaining the equal angular spacing on each ring. Alternatively, a pattern may be formed by combining a number of discrete polar arrangements together with different parameter values (preferably without including multiple light sources at the centre). Interesting arrangements useful for Fourier ptychography may be formed from a set of spirals placed at different angles to each other to achieve improved accuracy or efficiency.
[0136]
[0137]
[0138]
[0139] A further modification may be made by applying a transform to the desired set of transverse wavevectors.
[0140] It is noted that a subset of the concentric or spiral arrangements may be selected that is non-circular in extent. For example, the set of light sources falling within a square geometry may be selected.
[0141]
[0142]
[0143]
[0144] A further modification may be made by applying a transform to the desired set of transverse wavevectors.
Fourth Exemplary Implementation
[0145] In some applications, it may be advantageous to switch on multiple light sources at one time and capture lower resolution images on the camera 103. The computer processing required to generate the higher resolution image would be different in this case, owing to a need for additional processing to account for the multiple, non-adjacent sources and hence illumination angles; however, similar advantages over prior art variable illumination arrangements may be obtained.
Advantage
[0146] Estimates of the comparative performance of the above arrangements may be quantified using simulations of an FPM system with different variable illumination arrangements corresponding to different sets of illumination configurations. A large image of a histopathology slide may be used to simulate an infinitesimally thin specimen, and it is assumed that the specimen is in focus so that the effects of depth are small and may be ignored. Each low resolution capture image may be synthesised by selecting a small aperture in Fourier space corresponding to a low NA lens at a wavevector offset position corresponding to the angle of illumination. The low NA lens acts as a low resolution optical element to filter light in the imaging system. Spatial padding and a suitable windowing function may be used in the synthesis of these images to avoid artefacts at the image boundaries. The Tukey and Planck-taper window functions are suitable window functions for this purpose. The synthesised capture image is selected from the region at the centre of the synthesised image for which the window function is flat and takes the value 1.
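The synthesis of a single low resolution capture described above may be sketched as follows. This is a simplified sketch: the pupil is modelled as a hard circular aperture offset by the illumination wavevector, and the spatial padding and Tukey/Planck-taper windowing described above are omitted; all names and parameters are illustrative.

```python
import numpy as np

def synthesize_capture(obj, aperture_radius, k_off_y, k_off_x):
    """Synthesise a low resolution intensity capture from a thin,
    in-focus specimen `obj` by selecting a small circular aperture
    (a low-NA pupil) in its Fourier transform at a wavevector offset
    corresponding to the illumination angle."""
    spec = np.fft.fftshift(np.fft.fft2(obj))
    h, w = spec.shape
    yy, xx = np.mgrid[:h, :w]
    cy, cx = h // 2 + k_off_y, w // 2 + k_off_x
    # Hard circular pupil centred at the offset wavevector.
    pupil = (yy - cy) ** 2 + (xx - cx) ** 2 <= aperture_radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(spec * pupil))
    return np.abs(low) ** 2  # the camera records intensity only
```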
[0147] The capture images are processed according to method 600 (580) for a fixed number of iterations and the reconstructed image may be compared to the true image. Metrics such as mean square error and structural similarity (SSIM) are suitable for the comparison.
[0148]
[0149] It is possible to estimate the reduction in the number of light sources required to achieve a given score using the interpolation data shown in
TABLE 1: Estimated required number of light sources and % reduction to achieve given SSIM for FPM simulation using different reconstruction algorithms.
Configuration                                     AS      AR       ADS     ADR
Number of light sources to achieve SSIM = 0.892   196     193      166     164
% Change relative to arrangement AS               —       −1.5%    −15%    −16%
[0150] It is noted that the advantage estimates described above with reference to
[0151] Furthermore, it is noted that the above variable illuminator arrangements may be substantially achieved using an LED matrix with a very dense arrangement of LEDs on a regular grid. For each light-source position in the design, the LED from the matrix closest to that position may be selected. This essentially subsamples the LED matrix, illuminating the specimen using only the subset of LEDs that lie close to the desired positions in the illuminator arrangement.
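The subsampling of a dense regular LED matrix just described may be sketched as a nearest-grid-point snap; handling of duplicate selections (two design positions mapping to the same LED) is left aside, and the function name and pitch parameter are illustrative.

```python
import numpy as np

def snap_to_matrix(design_xy, pitch):
    """Replace each designed light-source position (x, y) with the
    nearest LED on a dense regular matrix of the given pitch."""
    return np.round(np.asarray(design_xy, dtype=float) / pitch) * pitch
```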
INDUSTRIAL APPLICABILITY
[0152] The arrangements described are examples of apparatus for Fourier ptychographic imaging and are applicable to the computer and data processing industries, and particularly for the microscopic inspection of matter, including biological matter. For example, specific arrangements according to the present disclosure provide for reducing the number of light sources to achieve a similar imaging effect as prior arrangements, or to provide improved performance using comparable numbers of light sources.
[0153] The arrangements disclosed, particularly through the control of the illuminator 108 (via 118) and the camera 103 (via 120) provide for the computer 105, when appropriately programmed, to implement the Fourier ptychographic imaging system. More specifically, the application program 1833 can be configured to control the illuminator and camera to cause the capture of the images 104 and then to process the images 104 as described to form a desired (higher resolution) image of the specimen.
[0154] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.