System and method for phase retrieval in lensless imaging

09970891 · 2018-05-15

Abstract

A method and system are presented for use in reconstruction and retrieval of phase information associated with a two-dimensional diffractive response. The method comprises: providing (75) input data indicative of one or more diffractive patterns corresponding to diffractive responses from one or more objects (50); dividing (130) said input data into a plurality of one-dimensional slices; determining (140) one-dimensional phase data for at least some of said one-dimensional slices; and tailoring (150) the reconstructed phase data of said one-dimensional slices to form a two-dimensional phase solution. The two-dimensional phase solution is defined by phase shifts of said reconstructed one-dimensional phase data of said one-dimensional slices, and thus enables obtaining two-dimensional reconstructed phase data suitable for reconstruction of image data (250).

Claims

1. A method for use in reconstruction of phase information associated with a two-dimensional diffractive response of one or more objects, the method comprising: providing input data indicative of at least three intensity diffractive patterns associated with one or more diffractive responses from said one or more objects, such that first and second diffractive patterns are independent of one another and the third diffractive pattern includes an interference of the first and the second diffractive patterns; dividing each of at least said independent and interfering intensity diffractive patterns into a plurality of one-dimensional slices to thereby form a plurality of sets of matching slices thereof; and tailoring the one-dimensional phase data reconstructed from said plurality of matching slices to thereby reconstruct two-dimensional phase data associated with said diffractive responses, said tailoring of the reconstructed one-dimensional phase data comprising determining phase shifts associated with each of said one-dimensional slices, said reconstructed two-dimensional phase data being suitable for reconstruction of image data.

2. The method of claim 1, wherein said one or more diffractive responses comprises diffractive responses of two or more objects, the method further comprising generating said input data based on said diffractive responses, said generating comprising determining cross-correlation and auto-correlation relations between said two or more objects from said diffractive responses to thereby determine intensity diffractive patterns of said objects and interference relations between said two or more objects.

3. The method of claim 1, wherein said at least three diffractive patterns are associated with diffractive response of the one or more objects to at least three illumination channels.

4. The method of claim 3, wherein said at least three diffractive patterns comprise data indicative of at least: a first diffractive pattern generated by a first coherent illumination; a second diffractive pattern generated by a second coherent illumination; and a third diffractive pattern generated by a combination of the first and second coherent illuminations.

5. The method of claim 4, wherein said determining reconstructed one-dimensional phase data for said plurality of one-dimensional slices comprises utilizing a cross term between said at least three diffractive patterns, said cross term being indicative of a relative phase difference between the first and second diffractive patterns.

6. The method of claim 1, wherein said determining said reconstructed one-dimensional phase data for each of said plurality of one-dimensional slices comprises utilizing a one-dimensional vectorial phase retrieval technique.

7. The method of claim 1, wherein said input data comprises a single-shot diffractive response of two or more of said objects; and wherein said method further comprises: determining from said single-shot diffractive response, auto-correlation of said two or more objects and cross-correlation between them; identifying intensity diffractive patterns of each of said two or more objects and interference relations between them.

8. A software product, embedded on a non-transitory computer readable medium and carrying computer readable code comprising instructions which, when operated on a computer system, cause the computer system to execute method steps for use in reconstruction of phase information associated with a two-dimensional diffractive response of one or more objects, the steps comprising: providing input data indicative of at least three intensity diffractive patterns associated with one or more diffractive responses from said one or more objects, such that first and second diffractive patterns are independent of one another and the third diffractive pattern includes an interference of the first and the second diffractive patterns; dividing each of at least said independent and interfering intensity diffractive patterns into a plurality of one-dimensional slices to thereby form a plurality of sets of matching slices thereof; and tailoring the one-dimensional phase data reconstructed from said plurality of matching slices to thereby reconstruct two-dimensional phase data associated with said diffractive responses, said tailoring of the reconstructed one-dimensional phase data comprising determining phase shifts associated with each of said one-dimensional slices, said reconstructed two-dimensional phase data being suitable for reconstruction of image data.

9. A system for use in phase reconstruction, the system comprising a processing utility configured for processing input data being indicative of at least three two-dimensional diffractive patterns associated with one or more diffractive responses of one or more objects, and to determine reconstructed phase data based on said input data; the processing utility comprising: a vector generating module configured to receive said one or more diffractive responses and to generate a corresponding plurality of sets of one-dimensional vectors respectively corresponding to plurality of one-dimensional slices of said at least three two-dimensional diffractive patterns; a one-dimensional phase reconstruction module configured to receive said sets of one-dimensional vectors, and to determine said reconstructed one-dimensional phase data associated with said sets of one-dimensional vectors; and a two-dimensional phase tailoring module configured to perform tailoring by receiving data indicative of reconstructed one-dimensional phase data from said one-dimensional phase reconstruction module and to generate corresponding said reconstructed two-dimensional phase data indicative of reconstructed phase information associated with said input data.

10. The system of claim 9, wherein the processing utility is configured and operable for reconstruction of phase information based on input data, said input data comprises said at least three diffractive patterns associated with said diffractive response of said one or more objects.

11. The system of claim 9, wherein said processing utility comprises a pre-processing module configured and operable to receive and process said input data to generate data indicative of a relative phase difference between at least two diffractive patterns of said one or more two-dimensional diffractive responses.

12. The system of claim 9, wherein the processing utility comprises a pre-processing module configured and operable to receive and process data indicative of at least one diffractive pattern associated with a diffraction response of two or more objects, said pre-processing module being configured to determine input data based on said data indicative of at least one diffractive pattern, said input data comprising diffraction pattern of each of said at least two objects and an interference relation between them.

13. The system of claim 9 configured for lens-less imaging, the system further comprising: at least first and second illumination channels for illuminating an object to be inspected, a detector unit comprising a pixel array for detecting scattered radiation from the object, and a control unit configured and operable for receiving data indicative of detected scattered radiation from said detector unit and processing said data to determine reconstructed image data of said object; said control unit being configured and operable for receiving said input data being indicative of scattered light caused by the first and second illumination channels and for processing said input data to reconstruct image data indicative of the object to be inspected; said processing comprising determining an inverse Fourier transform based on the two-dimensional reconstructed phase data and said detected diffractive patterns to thereby reconstruct image data of the object.

14. The system of claim 13, wherein the control unit is configured and operable to collect input data comprising first, second and third diffractive patterns respectively associated with the first illumination channel, the second illumination channel and the interference thereof.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) In order to better understand the subject matter that is disclosed herein and to exemplify how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:

(2) FIGS. 1A to 1C illustrate examples of an optical system for Coherent Diffractive Imaging (CDI) suitable for data acquisition for the technique of the present invention;

(3) FIGS. 2A to 2C exemplify the use of a controllable aperture for data acquisition suitable for use with the technique of the present invention;

(4) FIG. 3 illustrates a system for use in phase reconstruction based on data indicative of diffractive patterns according to some embodiments of the present invention;

(5) FIG. 4 illustrates an example of the technique of the present invention in a way of a block diagram;

(6) FIG. 5 exemplifies the technique of the present invention for phase reconstruction; and

(7) FIGS. 6A-6E show three corresponding diffractive patterns provided as input data (FIGS. 6A-6C) and image data for the reconstructed and original images respectively (FIGS. 6D-6E);

(8) FIGS. 7A-7E show experimental results of a single-shot phase reconstruction according to embodiments of the present invention: FIG. 7A shows the two objects being imaged; FIG. 7B shows a diffractive pattern resulting from lensless imaging of the objects; FIG. 7C shows a partial Fourier transform of the diffractive measurement; FIG. 7D shows amplitude reconstruction of the objects; and FIG. 7E shows phase reconstruction of the objects; and

(9) FIGS. 8A-8D show experimental results of a single-shot phase reconstruction of three objects according to embodiments of the present invention: FIG. 8A shows the objects; FIG. 8B shows a diffraction measurement of the objects; FIG. 8C shows a partial Fourier transform of the measured intensity; and FIG. 8D shows a reconstructed image of the objects.

DETAILED DESCRIPTION OF EMBODIMENTS

(10) The present invention provides a system and method for use in reconstruction of phase information based on input data indicative of diffractive measurements. As noted above, such diffractive measurements may be obtained by Coherent Diffractive Imaging (CDI) and/or lensless imaging techniques. It should however be noted that the diffractive patterns may be associated with forward scattering (i.e. transmission of light) and/or backward scattering (i.e. reflection).

(11) According to some embodiments, the input data may include at least three diffractive patterns such that first and second patterns are independent of one another as will be described below, and the third diffractive pattern includes an interference of the first and second diffractive patterns. In this connection reference is made to FIGS. 1A to 1C exemplifying an optical system 10 for lensless imaging of one or more objects 50.

(12) The system includes a light source system 20 configured to provide coherent illumination of the object, and a detector 70 (e.g. pixel array) configured to collect light scattered from the object 50. The detector 70 may generally be connectable to a computerized system (e.g. system 100 for phase reconstruction according to the present invention) for collecting and processing the data. The optical system 10 may generally include additional elements which are not specifically shown here, such as a stand for the inspected object 50, various beam splitters for additional measurements, certain optical elements for directing light onto the sample etc.

(13) It should be noted that the concept and details of lensless imaging or Coherent Diffractive Imaging (CDI) are generally well known in the art and thus will not be described herein in detail, except to note that lensless imaging utilizes light diffraction from the inspected object to collect information about parameters of the object such as structure and/or shape. The light source 20 (typically a laser light source) produces a coherent light beam 25 and directs it towards the object 50. The light beam 25 impinges on the object 50 and is scattered in various directions, e.g. scattered light 55, such that at least some of the scattered light is collected at the far field by the detector 70. The detector 70 is typically a pixel array enabling detection of the spatial distribution of the scattered light. It should be noted that the system 10 may be configured to collect back-scattered or forward-scattered radiation, and the location of the detector 70 is to be determined accordingly.

(14) At the far field, the scattered light is distributed as the Fourier transform of the object. However, a typical detector 70 can provide only intensity information on the collected light, rather than the desired amplitude and phase information. Therefore, in order to reconstruct an image of the inspected object 50, the phase of the collected light should be recovered to enable calculation of an inverse Fourier transform and reconstruction of the object. Thus, the present invention provides a technique for use in reconstruction of the image data based on diffractive patterns collected by the detector 70 in imaging methods such as CDI.
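The role of the missing phase can be illustrated with a short numerical sketch (a toy one-dimensional example, not the method of the invention): the detector records only the intensity of the far field, and inverting the magnitude alone does not recover the object, whereas inverting with the correct phase does.

```python
import numpy as np

# Toy 1-D "object": a small asymmetric aperture on a padded grid.
x = np.zeros(64)
x[10:20] = np.linspace(1.0, 2.0, 10)

X = np.fft.fft(x)            # far-field amplitude and phase
intensity = np.abs(X) ** 2   # what the detector actually records

# With the true phase the object is recovered exactly...
recovered = np.fft.ifft(np.sqrt(intensity) * np.exp(1j * np.angle(X)))
# ...but from the intensity alone (phase discarded) it is not.
phaseless = np.fft.ifft(np.sqrt(intensity))
```

The second inversion yields a symmetric function unrelated to the original aperture, which is precisely why a phase reconstruction step is required before the inverse Fourier transform.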

(15) As indicated above, the technique of the present invention utilizes input data indicative of acquired diffractive patterns. According to some embodiments, the input data includes at least three diffractive patterns collected by lensless imaging (or CDI) of an object to be inspected. Each of the collected diffractive patterns is a two-dimensional map indicating the intensity distribution of scattered light as measured by a detector unit (e.g. detector 70). The input data, preferably including the at least three diffractive patterns, is processed to reconstruct phase data for different rows (or columns) of the diffractive patterns, resulting in a set of one-dimensional reconstructed vectors. After reconstruction of phase data for all of the rows (or columns) is complete, the set of one-dimensional reconstructed vectors is tailored together to form a complete two-dimensional map of reconstructed phase information. Based on the reconstructed phase information, an inverse Fourier transform of the diffractive image data may be calculated to recover data about the object.

(16) FIG. 1B illustrates an example of the optical system 10 suitable for use with the technique of the present invention for lensless imaging of an object 50. As shown in the figure, the detector 70 may be connected to a system 100 for phase reconstruction of 2D images according to the present invention. However, it should be noted that the data collected by the detector may be stored in a local or remote storage utility and later processed by the system of the present invention. The example of FIG. 1B is similar to the optical system of FIG. 1A but includes a controllable aperture 28 configured to enable collection of first and second independent diffractive patterns, each indicative of part of the object 50. It should however be noted that the diffractive patterns may be collected utilizing first and second illumination channels based on two different light sources, and/or collected using two separate but similar samples, e.g. two molecules. The third measurement, providing the third diffraction pattern, may include illumination of both parts of the sample (in case the two independent measurements are associated with parts of the sample), and/or the use of the two illumination channels together, and/or illumination of the two replicas of the sample together (e.g. one replica on the left and one on the right). It should be noted that the geometrical resolution of the detector 70 defines the maximal resolution possible in the reconstruction of the image data. As will be described below, the geometrical resolution of the detector 70 may be defined as N_kx, N_ky, describing the number of pixels in each row and column respectively.

(17) FIG. 1C illustrates another example of the optical system 10 suitable for use with the technique of the present invention for lensless imaging. In this example, the system is configured for lensless imaging of two or more objects in a single shot (two such objects are shown, 50a and 50b). According to some embodiments of the invention, a diffraction response is collected from two or more objects separated by a distance l. The two or more objects, 50a and 50b in this non-limiting example, are of finite size that is preferably smaller than the distance l between them. The single-shot diffractive measurement is capable of providing accurate imaging of biological samples even under X-ray or electron scattering, which may generally destroy the sample and thus cannot be used for repetitive imaging. Thus, as will be described further below, the present technique may be used for phase reconstruction of a single-shot diffractive pattern generated from two or more well separated objects.

(18) As noted, the at least three measurements include a first measurement using a first illumination configuration, a second measurement using a second illumination configuration, and a third measurement using a combined illumination configuration. In some embodiments, as described in FIG. 1C, the at least three measurements correspond to diffractive patterns which may be extracted from the measured diffractive data as will be described further below, thus providing sufficient data for phase reconstruction in a single-shot scattering measurement. According to some other embodiments, as described in FIGS. 1A and 1B, the illumination configurations correspond to the use of an additional illumination channel, and/or illumination of part of the object 50 (as exemplified in FIG. 1B), and/or measurements of two replicas of the object. FIGS. 2A to 2C illustrate an example of the three configurations of the controllable aperture 28 shown in FIG. 1B. Generally, in order to provide sufficiently independent measurements, the controllable aperture is preferably located at a conjugate optical plane with respect to the sample, i.e. attached to the sample or in very close proximity thereto, or in an optical plane corresponding to the image-object planes. As shown, a first illumination pattern includes light passage through one half 28A of the aperture, the second illumination pattern includes light passage through a second half 28B of the aperture, and the third illumination pattern includes light passage through both parts of the aperture 28A+B.

(19) According to some embodiments of the present invention, the system 100 for phase reconstruction of 2D images may be connectable to the one or more light sources and/or the controllable aperture 28 to provide control commands for opening or closing the corresponding illumination channels. Additionally, the system 100 may include a control unit configured to generate such control commands to thereby generate the required data indicative of diffractive patterns.

(20) It should be noted that, in order to enhance understanding and to simplify notation in the following description, the three measurements will be referred to herein below as I_1=|E_1|^2 for the first measurement, I_2=|E_2|^2 for the second measurement and I_3=|E_1+E_2|^2 for the third measurement. As indicated above, the third measurement may preferably be a combined measurement, where the measured diffractive pattern includes interference of the two illumination configurations. It should also be noted that these notations apply to variation of the optical path, as well as to illuminating parts of the object or blocking a portion of the light as shown in FIG. 1B, or to illumination of two replicas of a similar object, and are generally used regardless of the actual measurement technique. More specifically, the intensity fields of the first and second independent measurements are referred to as I_1 and I_2 and the intensity field of the third measurement is referred to as I_3.
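In this notation, the three measurements can be sketched numerically. The fields below are random complex placeholders standing in for the far-field amplitudes E_1 and E_2 (not diffraction data); the sketch shows that the combined measurement I_3 is not simply the sum of I_1 and I_2 but carries an interference cross term encoding the relative phase.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical far-field complex amplitudes for the two illumination channels.
E1 = rng.normal(size=32) + 1j * rng.normal(size=32)
E2 = rng.normal(size=32) + 1j * rng.normal(size=32)

I1 = np.abs(E1) ** 2        # first independent measurement
I2 = np.abs(E2) ** 2        # second independent measurement
I3 = np.abs(E1 + E2) ** 2   # combined (interfering) measurement

# The excess over I1 + I2 is the cross term 2*Re(E1*conj(E2)),
# which encodes the relative phase between the two fields.
cross = I3 - I1 - I2
```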

(21) Thus, the technique of the present invention provides for phase reconstruction from input data indicative of diffractive patterns of a wave scattered from a sample to be inspected. In this connection, reference is made to FIG. 3 illustrating a system 100 for reconstruction of two-dimensional phase in the form of a block diagram. The system 100 is generally a computerized system including one or more data processing utilities, a storage utility and suitable input and output utilities. The main processing utility includes a two-dimensional phase reconstruction utility 110 according to the present invention, and may also include an inverse Fourier transform calculation module 200 configured to use the reconstructed phase information to obtain data about the inspected object. Generally, the system 100 receives input data indicative of at least three diffractive measurements, as indicated above, from a detector unit 70 and a corresponding image data generating unit 75 associated with the optical setup for inspection. It should however be noted that the data may be received from any type of storage utility or communication utility, in the case of prior measurement.

(22) The two-dimensional phase reconstruction utility 110 may generally include various computational modules for phase reconstruction. As shown in FIG. 3, the 2D phase reconstruction utility 110 includes a pre-processing module 120, a vector generating module (VGM) 130, a one-dimensional vectorial phase retrieval module (VPR) 140 and a two-dimensional phase tailoring module (2DPT) 150. The pre-processing module 120 is generally configured and operable to determine whether the input data is appropriate for phase reconstruction and/or contains operation commands for the system. Additionally, the pre-processing module 120 may operate to calculate required intermediate data based on input data indicative of measurements I_1, I_2 and I_3; the intermediate data is generally associated with the relative phase between measurements of two different illumination channels as described above. After verification and intermediate calculation, the pre-processing module 120 transmits the data to the vector generating module (VGM) 130, which operates to separate the input data into a plurality of one-dimensional vectors to enable individual processing of each of the 1D vectors. The VGM 130 thereby shifts the phase reconstruction problem from a single two-dimensional problem into a set of one-dimensional phase problems, where each of the 1D vectors corresponds to a line (or a column) of the original 2D diffractive patterns. The VGM 130 then operates to transmit vector data corresponding to each line/column of the input data (indicative of three 2D diffractive patterns) to the one-dimensional vectorial phase retrieval module (VPR) 140 for performing 1D phase reconstruction. Generally, the VPR module 140 may utilize any known phase reconstruction technique capable of calculating phase information based on one-dimensional data; for example, the VPR module 140 may utilize the Vectorial Phase Retrieval (VPR) technique developed by the inventors of the present invention, which will be described in more detail below.

(23) After the VPR module 140 successfully reconstructs the phase for each of the 1D vectors, the reconstructed phase vectors, as well as the input data, are transmitted to the two-dimensional phase tailoring module (2DPT) 150 for tailoring back into a two-dimensional diffractive pattern with the reconstructed phase information. The resulting diffractive pattern, including the reconstructed phase information, may be stored or transferred for further analysis, and/or transmitted to the Fourier transform calculation module 200, which may operate to calculate an inverse Fourier transform of the diffractive pattern to provide data about the structure of the object.

(24) Generally, the image reconstruction system 100 may provide output in the form of object data 250 indicative of image data of the sample object. However, according to some embodiments, the image reconstruction system 100 may output data indicative of the diffractive measurements including reconstructed phase information, to thereby enable calculation of an inverse Fourier transform by a separate system, or analysis of the Fourier data of the sample.

(25) Reference is made to FIG. 4 illustrating a flow chart exemplifying the technique of the present invention as used in phase reconstruction of two-dimensional objects. As shown, the technique utilizes input data (step 1010) including data indicative of the diffractive measurements. Such input data may also include metadata, such as data about the geometrical resolution at which the diffractive measurements were taken. The input data may generally undergo pre-processing (step 1020) to identify errors in the data or control commands, and possibly to provide intermediate data such as relative phase data, which will be described further below. After pre-processing of the input data, the at least three diffractive measurements are separated into slices, i.e. each two-dimensional data matrix is divided into a plurality of one-dimensional vectors, lines or columns of the matrix (step 1030). The one-dimensional vectors thus include sets of three (generally at least three) vectors of a certain line/column, one from each of the diffractive measurements. Each of these sets is then processed together (step 1040) using the Vectorial Phase Retrieval (VPR) technique developed by the inventors of the present invention. This processing phase provides phase reconstruction for a plurality of one-dimensional vectors, where each includes a certain linear phase ambiguity. These vectors are tailored together by identifying the correct linear phase for each of the vectors to provide two-dimensional diffractive measurement data with reconstructed phase information (step 1050). After phase reconstruction is complete, an inverse Fourier transform of the diffractive measurement data can be calculated (step 1060) to provide the object's data. It should be noted that the system 100 of FIG. 3 may be any type of computerized system configured and operable to perform the calculation steps as described in FIG. 4.
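The data flow of steps 1030-1060 can be sketched as follows. This is only a plumbing sketch under simplifying assumptions: the solve_1d callable stands in for the vectorial phase-retrieval step (step 1040), and here an oracle that knows the true field plays that role (with the per-row linear-phase alignment of step 1050 folded into it), so only the slicing, stacking and inversion steps are actually exercised.

```python
import numpy as np

def reconstruct(I1, I2, I3, solve_1d):
    """Data-flow skeleton of steps 1030-1060: slice, solve per row, stack, invert.
    solve_1d stands in for the 1-D vectorial phase-retrieval step (step 1040)."""
    rows = []
    for m in range(I1.shape[0]):                       # step 1030: 1-D slices
        rows.append(solve_1d(I1[m], I2[m], I3[m], m))  # step 1040: per-slice phase
    field = np.vstack(rows)                            # step 1050: tailor/stack rows
    return np.fft.ifft2(field)                         # step 1060: inverse Fourier transform

# Toy check of the plumbing with an oracle solver that "knows" the true field E1.
obj = np.zeros((8, 8), complex)
obj[2:4, 3:6] = 1.0 + 0.5j
E1 = np.fft.fft2(obj)
I1 = np.abs(E1) ** 2
I2 = np.zeros_like(I1)
I3 = I1.copy()
oracle = lambda i1, i2, i3, m: np.sqrt(i1) * np.exp(1j * np.angle(E1[m]))
rec = reconstruct(I1, I2, I3, oracle)
```

With the oracle in place, the stacked rows reproduce the full far field and the inverse transform returns the object, confirming that the surrounding pipeline is consistent.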

(26) The phase reconstruction technique of the present invention is exemplified in FIG. 5, illustrating collected diffractive patterns 1100, 1200 and 1300, one-dimensional slices of the patterns used for vectorial phase reconstruction 1400 and 1500, semi-reconstructed phase maps 1600 and 1700, and fully reconstructed image data 1800. The three diffractive patterns 1100, 1200 and 1300 may be collected as described above, such that two patterns are independent and the third is an interference of the first and second, and are provided as input data. The technique may then operate along either the horizontal or vertical route, where the input diffractive patterns are sliced to generate sets of one-dimensional vectors, 1400 for the vertical route or 1500 for the horizontal route, which are used to determine the phase data along each slice. It should be noted that, in order to simplify the figure, only two slices are shown here; the present technique however utilizes corresponding slices from all of the diffractive patterns provided in the input data, and preferably utilizes the relative phase map rather than the interference diffractive pattern. The phase information is used to determine the semi-reconstructed patterns 1600 or 1700, where the input data is Fourier transformed along the selected vertical or horizontal axis. The phase shifts of each row or column may then be determined to tailor the slices together and provide the fully reconstructed image data 1800. It should be noted that the technique may utilize either the horizontal or the vertical path to determine the reconstructed phase information and provide the corresponding image data.

(27) As indicated above, the technique of the present invention enables 2D phase reconstruction of diffractive measurements. According to some embodiments, the technique utilizes input data including three diffractive measurements/patterns, such as I_1, I_2 and I_3, acquired as described above with reference to FIGS. 1A and 1B. As also indicated above, the diffractive measurements correspond to intensities of light fields provided by varying the illumination channel and can be described as follows:

$$I_{1,2}=\left|\tilde{E}_{1,2}(\vec{k})\right|^{2},\qquad I_{3}=\left|\tilde{E}_{3}(\vec{k})\right|^{2}=\left|\tilde{E}_{1}(\vec{k})+\tilde{E}_{2}(\vec{k})\right|^{2}\qquad\text{(equation 1)}$$

(28) where k = (k_x, k_y) is a spatial frequency vector, i.e. a Fourier component of the object, and generally corresponds to a certain pixel of the detector. As indicated above, the geometrical resolution of the detector, N_kx by N_ky pixels, limits the maximal resolution achievable by reconstruction (as dictated by the Nyquist theorem). The technique of the present invention may require certain over-sampling, such that the maximal resolution achievable for the reconstructed image data is n_x by n_y, while N_kx=4n_x and N_ky=2n_y.
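The over-sampling condition can be made concrete with a short sketch (illustrative sizes only): embedding an n_x-by-n_y object in a 4n_x-by-2n_y window before the Fourier transform is equivalent to sampling its far field on the finer N_kx-by-N_ky detector grid.

```python
import numpy as np

n_x, n_y = 8, 8                 # target reconstruction resolution
N_kx, N_ky = 4 * n_x, 2 * n_y   # required detector sampling (over-sampling)

rng = np.random.default_rng(2)
obj = rng.random((n_x, n_y))    # hypothetical n_x-by-n_y object

# Zero-padding the object into the larger window samples its far field
# on the denser N_kx-by-N_ky grid of the detector pixel array.
padded = np.zeros((N_kx, N_ky))
padded[:n_x, :n_y] = obj
pattern = np.abs(np.fft.fft2(padded)) ** 2   # what the pixel array records
```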

(29) The input data may be pre-processed by the pre-processing module 120 to calculate intermediate data in the form of a cross term indicative of the relative phase between the two optical fields E_1 and E_2, i.e. to calculate the term $\tilde{E}_1(\vec{k})\,\tilde{E}_2^{*}(\vec{k})$. For example, the pre-processing module may perform the calculation as follows:

(30)

$$\Re\!\left[\tilde{E}_1(\vec{k})\,\tilde{E}_2^{*}(\vec{k})\right]=\frac{1}{2}\left[\left|\tilde{E}_3(\vec{k})\right|^{2}-\left|\tilde{E}_1(\vec{k})\right|^{2}-\left|\tilde{E}_2(\vec{k})\right|^{2}\right]\qquad\text{(equation 2)}$$

$$R(x,k_y)=\mathcal{F}_x^{-1}\!\left(\Re\!\left[\tilde{E}_1(\vec{k})\,\tilde{E}_2^{*}(\vec{k})\right]\right)\qquad\text{(equation 3)}$$

$$I(x,k_y)=\begin{cases} iR(x,k_y) & x\le 2n_x=\tfrac{1}{2}N_{kx}\\ \left(iR(4n_x-x,\,k_y)\right)^{*} & x> 2n_x=\tfrac{1}{2}N_{kx}\end{cases}\qquad\text{(equation 4)}$$

$$\tilde{E}_1(\vec{k})\,\tilde{E}_2^{*}(\vec{k})=\mathcal{F}_x\!\left[R(x,k_y)+iI(x,k_y)\right]\qquad\text{(equation 5)}$$

(31) where $\Re$ indicates that equation 2 calculates the real part of the interference term; R(x,k_y) and I(x,k_y) are respectively the real and imaginary parts corresponding to intermediate calculations; $\mathcal{F}_x$ and $\mathcal{F}_x^{-1}$ are the one-dimensional Fourier and inverse Fourier operators along the x axis; i is the imaginary unit (i.e. i^2 = -1); and k_x, k_y stand for coordinates (pixels) in the detector plane, while x, y stand for coordinates (pixels) of the reconstructed image data. As noted above, the input data may generally be in the form of three matrices, each having 4n_x by 2n_y pixels.

(32) With the relative phase term calculated, the input data, including data pieces indicative of the first and second diffractive patterns together with data about the relative phase, is transmitted to the Vector Generating Module (VGM) 130. The VGM 130 is configured and operable to generate a set of one-dimensional vectors, each corresponding to a line of the detected intensity field (and the associated line of the intermediate term). The VGM transmits the one-dimensional vectors to the one-dimensional phase reconstruction (VPR) module 140 for line-by-line phase reconstruction. The 1D vectors may generally include the following terms:
|Ẽ₁(k_x, k_y=m)|², |Ẽ₂(k_x, k_y=m)|², and Ẽ₁(k_x, k_y=m)Ẽ₂*(k_x, k_y=m)

(33) The VPR module 140 is configured and operable for phase reconstruction of one-dimensional input data based on at least three vectors corresponding to first and second independent measurements and a third measurement indicative of interference between the first and second measurements. The VPR module operates to reconstruct the phase information, up to an ambiguous linear phase, for each set of at least three 1D vectors. Thus, the VPR module calculates the unknown phases X₁,₂(k⃗) defined by Ẽ₁,₂(k⃗) = |Ẽ₁,₂(k⃗)|X₁,₂(k⃗) for k⃗ = (k_x, k_y=m).

(34) More specifically, for each set of 1D input vectors, the VPR module 140 may operate to determine the reconstructed phase data utilizing matrix calculations. For example, the VPR module 140 may utilize a matrix A_m of size 8n_x × 8n_x (i.e. 2N_kx × 2N_kx) of the form:

(35)

$$(A_m)_{k,l} = \begin{cases}
|\tilde{E}_1(l,m)|\, e^{2\pi i\, l(k+2n_x)/4n_x}, & k \le 2n_x,\ l \le 4n_x \\
|\tilde{E}_2(l-4n_x,m)|\, e^{2\pi i\,(l-4n_x)(k+2n_x)/4n_x}, & 2n_x < k \le 4n_x,\ l > 4n_x \\
\tilde{E}_1(k-4n_x,m)\,\tilde{E}_2^*(k-4n_x,m), & k > 4n_x,\ l = k \\
-|\tilde{E}_1(k-4n_x,m)||\tilde{E}_2(k-4n_x,m)|, & k > 4n_x,\ l = k-4n_x \\
0, & \text{otherwise}
\end{cases} \quad (\text{equation 6})$$

(36) and may operate to recover the unknown phases X₁,₂(k_x, k_y=m) by solving a linear problem of the form
A_m X_m = 0 (equation 7)

(37) where X_m = [X₁(k_x, k_y=m), X₂(k_x, k_y=m)] is a 1D vector of length 8n_x (i.e. 2N_kx).
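Equation 7 is a homogeneous linear system. One standard way to obtain its (least-squares) null vector, sketched here as an assumed implementation choice rather than the patent's prescribed solver, is via the singular value decomposition:

```python
import numpy as np

def solve_homogeneous(A):
    """Return a unit-norm vector x minimizing ||A @ x|| -- i.e. the
    least-squares solution of the homogeneous system A x = 0."""
    # The right singular vector associated with the smallest singular
    # value spans the (approximate) null space of A.
    _, _, Vh = np.linalg.svd(A)
    return Vh[-1].conj()
```

The recovered vector X_m would then be normalized entry-wise to unit modulus to serve as phase data; the global phase (and the linear-phase ambiguity noted above) remains undetermined.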

(38) When the VPR module 140 completes the phase reconstruction of all of the sets of 1D vectors, it provides corresponding sets of 1D phase-vectors each corresponding to phase information (up to an unknown linear phase) of each row of the input data. The set of phase-vectors is then transmitted to the Two-Dimensional Phase Tailoring Module (TDPT module) 150 which is configured and operable to combine all of the phase-vectors into a matrix of two-dimensional phase data.

(39) To this end, the TDPT module 150 is configured to determine a phase shift associated with each of the 1D phase-vectors determined by the VPR module, and to combine the phase-shifted vectors so as to reconstruct the phase variations along the perpendicular axis.

(40) Generally, the TDPT module 150 may utilize knowledge of the illumination channel and/or aperture providing illumination of the object, i.e. a known support of the reconstructed data. The phase shifts may then be determined by solving a set of linear equations of the form:

$$\sum_m |\tilde{E}_{1,2}(k_x,m)|\, X_{1,2}(k_x,m)\, \varphi(m)\, e^{imy} = 0 \quad (\text{equation 8})$$

(41) for all values of y outside the illumination region of the illumination channel/aperture. Here φ(m) is the phase shift of the corresponding 1D phase-vector X₁,₂(k_x, m) and the summation is over all values of m = k_y. It should be noted that equation 8 can be presented as Mφ = 0, thereby enabling determination of the phase shifts by solving this linear problem.

(42) It should be noted that although objects of finite size are defined as having a compact support, a typical object being inspected may be of unknown size, and thus the support may be unknown. In such cases, the technique of the present invention may scan over the possible support sizes to determine the phase vectors X_m and φ(m). If the assumed support size is too small or too large, the determined phase vector will be incompatible with the required image data, producing a phase residue outside of the assumed support. To this end, the system may include a support estimation module configured to provide an estimate of the support size to the VPR module 140 and the TDPT module 150, and to analyze the determined phase vector with respect to the inverse Fourier transform of the diffractive data (being one- or two-dimensional). The support estimation module can thus determine the correct support size in accordance with the phase residue, such that a minimal phase residue indicates that the corresponding assumed support size is correct. It should be noted that the support size is generally determined separately for the VPR module 140 and the TDPT module 150, thereby providing independent support sizes along the vertical and horizontal axes of the object 50.
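A minimal sketch of such a support scan, assuming the residue is simply measured as the energy of the reconstructed field outside an assumed support; the function names and the tolerance are illustrative, not from the patent:

```python
import numpy as np

def residue_outside(field, support):
    """Energy of a reconstructed 1D field outside an assumed support size."""
    return np.sum(np.abs(field[support:]) ** 2)

def estimate_support(field, max_size, tol=1e-10):
    """Smallest assumed support size whose outside residue is negligible
    relative to the total energy of the field."""
    total = np.sum(np.abs(field) ** 2)
    for size in range(1, max_size + 1):
        if residue_outside(field, size) <= tol * total:
            return size
    return max_size
```

In the full scheme the residue would be re-evaluated for each candidate support after re-running the phase reconstruction, rather than on a fixed field as in this sketch.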

(43) In practice, determining a solution for equations 7 and 8 may be done by selecting the column of the matrix (A_m or M) having maximal norm, removing the index corresponding to the selected column from the associated vector (X or φ), and minimizing the difference between the remaining part of the matrix multiplied by the remaining part of the vector and the selected column. For example, this amounts to determining the vector φ̄ which minimizes the expression:

$$\|\bar{M}\bar{\varphi} - M_1\|^2 \quad (\text{equation 9})$$

(44) where M₁ is the selected column having maximal norm, M̄ is the remaining matrix without the selected column, and φ̄ is the remaining vector without the index corresponding to the selected column. It should also be noted that the support size may be determined (if unknown) utilizing the minimizer φ̄ indicated in equation 9. As indicated above, for an incorrect support size, the minimizer φ̄, after being normalized to provide phase data, may generate a phase residue outside the assumed support size. Thus the corresponding modules may utilize various assumed support sizes to minimize the phase residue and thus determine the correct size of the object 50.
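The column-pivot procedure of equation 9 can be sketched as follows (an assumed NumPy formulation; the pivot entry is fixed to −1 so that the minimized difference corresponds to the full homogeneous product Mφ):

```python
import numpy as np

def solve_by_max_column(M):
    """Sketch of the equation-9 procedure: fix the entry of the
    maximal-norm column and solve for the rest by least squares."""
    j = int(np.argmax(np.linalg.norm(M, axis=0)))   # pivot column index
    M_rest = np.delete(M, j, axis=1)                # matrix without pivot column
    # minimize || M_rest @ phi_rest - M[:, j] ||^2   (equation 9)
    phi_rest, *_ = np.linalg.lstsq(M_rest, M[:, j], rcond=None)
    # reassembling with -1 at the pivot gives M @ phi ~= 0
    return np.insert(phi_rest, j, -1.0)
```

Pivoting on the maximal-norm column keeps the least-squares problem well conditioned, since that column is the one least likely to correspond to a near-zero entry of the sought vector.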

(45) Reference is made to FIGS. 6A-6E showing an additional set of input and reconstructed data provided by the technique of the invention. FIGS. 6A-6C show respectively diffractive patterns as collected from an object with a first illumination channel, second illumination channel and a combined channel as described above. The input data have been processed in accordance with the present invention as described above to reconstruct image data indicative of the object. FIG. 6D shows the reconstructed image data determined by the technique of the present invention. The reconstructed image data can be compared to the original image shown in FIG. 6E.

(46) Additionally, as indicated above, with reference to FIG. 1C, the technique of the present invention may also be used for phase reconstruction based on a single-shot diffractive measurement of two or more objects. FIGS. 7A-7E and 8A-8D show phase reconstruction of a single shot of two (FIGS. 7A-7E) and three (FIGS. 8A-8D) objects according to the present technique.

(47) The phase reconstruction of a single-shot diffractive measurement takes advantage of the inventors' understanding that the Fourier transform of the diffractive intensity pattern is indicative of the autocorrelation of the imaged object. When two or more spatially separated objects are imaged simultaneously, the inverse Fourier transform of the diffractive pattern includes the sum of the objects' autocorrelations and spatially distinct cross-correlations between them. This allows extraction of the required information |Ẽ₁(k_x, k_y=m)|², |Ẽ₂(k_x, k_y=m)|², and Ẽ₁(k_x, k_y=m)Ẽ₂*(k_x, k_y=m), corresponding to the individual objects' diffractive responses and the interference relation between the objects, as described above.

(48) In this connection, denoting the two objects A(x⃗), B(x⃗) (and possibly C(x⃗) when three objects are used, and so on for more objects) and their corresponding spectra (Fourier transform/diffractive response) as Ã(k⃗), B̃(k⃗), the diffractive response of both objects separated by a distance l⃗ is:

$$\mathcal{F}\big[A(\vec{x}) + B(\vec{x}+\vec{l})\big] = \tilde{A}(\vec{k}) + \tilde{B}(\vec{k})\,e^{i\vec{l}\cdot\vec{k}} \quad (\text{equation 10})$$

where F stands for the Fourier transform.

(49) The actual measurement provides only the intensity information, thus requiring the phase reconstruction. However, the inverse Fourier transform of the measured intensity of the diffractive pattern can be expressed in terms of single-object autocorrelations and cross-correlations between the objects:

$$\mathcal{F}^{-1}\Big[\big|\tilde{A}(\vec{k}) + \tilde{B}(\vec{k})\,e^{i\vec{l}\cdot\vec{k}}\big|^2\Big] = A(\vec{x}) \star A(\vec{x}) + B(\vec{x}) \star B(\vec{x}) + A(\vec{x}) \star B(\vec{x}+\vec{l}) + B(\vec{x}+\vec{l}) \star A(\vec{x}) \quad (\text{equation 11})$$
here ⋆ is the 2D cross-correlation operator, i.e. f ⋆ g(d⃗) = ∫ f*(x⃗) g(x⃗+d⃗) dx⃗, where f* denotes the complex conjugate of f. As the two or more objects are finite, i.e. A(x⃗) and B(x⃗) have limited (compact) support, and the distance l⃗ between them is larger than the size of the objects, the cross-correlation terms are separated from the autocorrelation terms, as shown in FIG. 7C. This allows separation between the cross-correlation and autocorrelation terms to determine the required intensity patterns. A Fourier transform of the autocorrelation terms of the two objects provides:
$$\mathcal{F}\big[A(\vec{x}) \star A(\vec{x}) + B(\vec{x}) \star B(\vec{x})\big] = |\tilde{A}(\vec{k})|^2 + |\tilde{B}(\vec{k})|^2 \quad (\text{equation 12})$$

(50) Meanwhile, Fourier transforms of the different cross-correlation terms provide:

$$\mathcal{F}\big[A(\vec{x}) \star B(\vec{x}+\vec{l})\big] = e^{i\vec{l}\cdot\vec{k}}\,\tilde{A}^*(\vec{k})\,\tilde{B}(\vec{k})$$

$$\mathcal{F}\big[B(\vec{x}+\vec{l}) \star A(\vec{x})\big] = e^{-i\vec{l}\cdot\vec{k}}\,\tilde{A}(\vec{k})\,\tilde{B}^*(\vec{k})$$

and thus also

$$\big(e^{-i\vec{l}\cdot\vec{k}}\,\tilde{A}(\vec{k})\,\tilde{B}^*(\vec{k})\big)\big(e^{i\vec{l}\cdot\vec{k}}\,\tilde{A}^*(\vec{k})\,\tilde{B}(\vec{k})\big) = |\tilde{A}(\vec{k})|^2\,|\tilde{B}(\vec{k})|^2 \quad (\text{equation 13})$$
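Equations 10-13 can be verified numerically. The sketch below uses hypothetical 1D test objects, with support and separation values chosen by hand for illustration only:

```python
import numpy as np

# Two compact 1D test objects, well separated (support 16, separation 64)
N = 256
rng = np.random.default_rng(0)
a = np.zeros(N, complex)
b = np.zeros(N, complex)
a[:16] = rng.normal(size=16) + 1j * rng.normal(size=16)
b[:16] = rng.normal(size=16) + 1j * rng.normal(size=16)
sep = 64
scene = a + np.roll(b, sep)                          # object A plus shifted object B
corr = np.fft.ifft(np.abs(np.fft.fft(scene)) ** 2)   # equation 11: auto + cross terms

# Signed displacement axis; the auto- and cross-correlation terms occupy
# disjoint windows because the separation exceeds the object size
d = (np.arange(N) + N // 2) % N - N // 2
auto = np.where(np.abs(d) < 32, corr, 0)             # A*A + B*B terms near d = 0
cross_p = np.where(np.abs(d - sep) < 32, corr, 0)    # A correlated with shifted B
cross_m = np.where(np.abs(d + sep) < 32, corr, 0)    # shifted B correlated with A

sum_I = np.fft.fft(auto)                             # equation 12: |A~|^2 + |B~|^2
prod_I = np.fft.fft(cross_p) * np.fft.fft(cross_m)   # equation 13: |A~|^2 |B~|^2
```

The linear-phase factors e^{±il·k} carried by the two cross terms cancel in the product, which is why equation 13 yields the phase-free quantity |Ã|²|B̃|².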

(51) Equations 12 and 13 provide data on the sum and the product of |Ã(k⃗)|² and |B̃(k⃗)|². Based on the sum and product of the intensities, the separate intensity data for |Ã(k⃗)|² and |B̃(k⃗)|² can be determined, but with a certain ambiguity regarding which value corresponds to |Ã(k⃗)|² and which to |B̃(k⃗)|². To this end, a difference value for each pixel (each k⃗ value) can be defined; utilizing continuity of the difference value, the problem may thus be converted to an associated sign problem, which can be solved by defining regions of similar sign and the borders between them. Limiting the number of equations for the sign problem may also utilize data about the compact support of the objects, i.e. the Fourier transform of the difference values is zero outside of the compact support regions.
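Given the per-pixel sum S (equation 12) and product P (equation 13), the two intensities are the roots of t² − St + P = 0. A minimal sketch follows; the per-pixel assignment ambiguity described above (the sign problem) is deliberately left unresolved here:

```python
import numpy as np

def split_intensities(S, P):
    """Recover the two intensity values at each pixel from their sum S
    and product P, as the roots of t^2 - S*t + P = 0.  Which root
    belongs to which object remains ambiguous (the 'sign problem')."""
    diff = np.sqrt(np.maximum(S ** 2 - 4 * P, 0.0))   # |I_A - I_B| per pixel
    return (S + diff) / 2, (S - diff) / 2
```

The clamping to zero guards against small negative values of S² − 4P caused by numerical noise; the returned arrays are the pixel-wise larger and smaller intensity, to be assigned to the two objects by the sign-continuity argument of the text.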

(52) It should be noted that the naïve solution to the sign problem would require that the Fourier transform of |Diff(k⃗)|·Sign(k⃗) at any point x⃗ outside of the compact support of any of the objects is zero. This requirement has, a priori, a higher number of unknowns than the number of equations. However, the inventors' understanding provides for greatly reducing the number of unknowns: the sign between two neighboring points cannot change unless there is a zero between them. Thus, to determine the sign value of Diff(k⃗), the technique of the invention may utilize separation of Diff(k⃗) into a plurality of sign-regions, i.e. regions of similar sign separated by boundaries where the sign of Diff(k⃗) is unknown. This technique may greatly reduce the number of unknowns in the sign problem, often by an order of magnitude, i.e. a factor of 10, or more.

(53) Thus, the information about the finiteness of the two or more objects (their compact support) and the separation between them allows determination of the individual values of |Ã(k⃗)|² and |B̃(k⃗)|², from which, according to the description above, the phase can be retrieved by the VPR method. As indicated above, the phase reconstruction technique utilizes information about the individual diffractive responses, i.e. |Ã(k⃗)|² and |B̃(k⃗)|², and data about the interference relation between the objects (or, as described above, between the illumination paths), i.e. Ã(k⃗)B̃*(k⃗).

(54) Thus, the present invention provides a novel technique for use in phase reconstruction of two-dimensional diffractive data. It should also be understood that the system according to the invention may be a suitably programmed computer. As indicated above, the technique may be implemented by a computerized system, or may be embedded therein, being configured to receive input data from a measurement system and/or a remote storage utility. Likewise, the invention contemplates a computer program being readable by a computer for executing the method of the invention. The invention further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the method of the invention. Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention as hereinbefore described without departing from its scope defined in and by the appended claims.