SPECTRAL IMAGING METHOD AND SYSTEM

20180252583 · 2018-09-06


    Abstract

    An imaging system and method are presented for use in reconstructing spectral data of an object. The imaging system comprises: an optical unit; a pixel array of a detector; and a data processor for receiving and processing image data indicative of light detected by the pixel array and generating reconstructed spectral data of the object being imaged. The optical unit is configured and operable for applying a predetermined coding to an input light field while creating an optical image thereof on a detection plane defined by the pixel array. The image data is therefore a function of the predetermined coding and a spectrum of the object to be determined.

    Claims

    1. An imaging system for use in reconstructing spectral data of an object being imaged, the imaging system comprising: an optical unit; a detector having a pixel array configured and operable to detect light from the optical unit and generate image data indicative of the detected light; and a data processor configured and operable to receive and process the image data indicative of the light detected by the pixel array and to generate reconstructed spectral data of the object being imaged by the optical unit; wherein the optical unit is configured and operable for applying predetermined angular coding to an input light field while creating an optical image thereof on an imaging plane defined by the pixel array, such that the image data is a function of a predetermined angular code and a spectrum of the object to be determined.

    2. The system of claim 1, wherein the optical unit comprises a coder assembly configured for applying said predetermined angular coding, and an imaging lens module, the imaging plane defined by the pixel array being located in the back focal plane of the lens module.

    3. The system of claim 2, wherein the coder assembly is accommodated in the optical path of the input light between the lens module and the imaging plane.

    4. The system of claim 2, wherein the coder assembly is accommodated upstream of the lens module with respect to input light propagation direction.

    5. The system of claim 2, wherein the coder assembly is operable in transmission or reflection mode.

    6. The system of claim 1, wherein the detector is a monochromatic or color detector.

    7. The system of claim 1, wherein the angular coding applied to the input light field is defined by an effective spectral transmission function of the optical unit.

    8. The system of claim 7, further comprising a controller configured and operable to sequentially modify the effective spectral transmission function of the optical unit, to modify the angular coding applied to the input light field, the image data being thereby indicative of sequentially acquired frames with different angular coding of the input light field.

    9. (canceled)

    10. The system of claim 2, wherein the coder assembly comprises a dispersive unit, a given relative orientation of said lens module and a dispersive pattern of said dispersive unit defining a given effective spectral transmission function of the optical unit, the predetermined angular code being defined by the effective spectral transmission function.

    11. The system of claim 10, comprising a controller associated with at least one of the dispersive unit and the lens module and configured and operable for modifying the effective spectral transmission function of the optical unit by carrying out at least one of the following: affecting the dispersive pattern of a tunable dispersive unit, affecting an angular position of the dispersive pattern with respect to an optical axis of the optical unit, varying a focal length of the lens module, and displacing the lens module.

    12-14. (canceled)

    15. The imaging system of claim 1, wherein the optical unit is configured for imaging the object on at least N pixels of the pixel array, thereby allowing reconstruction of N spectral bands of the object being imaged.

    16. The imaging system of claim 15, wherein the optical unit includes a dispersive unit and a lens module including one or more lenses, a given relative orientation of said lens module and a dispersive pattern of said dispersive unit defining an effective spectral transmittance of the optical unit, the predetermined angular code being defined by the effective spectral transmission function.

    17. The imaging system of claim 10, wherein the dispersive unit comprises an etalon.

    18. The imaging system of claim 10, wherein the dispersive unit is tunable enabling controllable variation of the dispersive pattern thereof.

    19. The imaging system of claim 10, wherein the dispersive unit comprises a dispersive element having the predetermined dispersive pattern.

    20. The imaging system of claim 7, wherein the angular coding applied by the optical unit provides angular multiplexing of image data at the pixel array, such that the detected light intensity at the pixel corresponds to the spectral data of the image multiplexed with the effective transmittance function of the optical unit.

    21-22. (canceled)

    23. The imaging system of claim 7, wherein the processing unit is configured and operable to pre-process the image data corresponding to an image of a region of interest within an acquired frame and identify the object whose spectral data is to be reconstructed, and utilize the effective spectral transmission function corresponding to acquisition of said frame for processing the image data of the identified object and reconstructing the object's spectrum.

    24. The imaging system of claim 23, wherein said pre-processing comprises applying at least one pattern recognition algorithm to the image data from the detector to identify the object having a substantially uniform spectral content.

    25. A system for use in reconstructing spectral data of an object, the system comprising: an optical unit comprising an imaging lens module and a dispersive unit, a given relative orientation of said lens module and a dispersive pattern of said dispersive unit defining a given effective spectral transmission function of the optical unit, the predetermined angular code being defined by the effective spectral transmission function, such that said optical unit applies an angular coding, defined by the effective spectral transmission function, to an input light field while the input light field is being imaged onto an imaging plane defined by a pixel array of a detector located in a back focal plane of the imaging lens module; a controller configured and operable to sequentially modify the effective spectral transmission function of the optical unit, to modify the angular coding applied to the input light field, such that image data generated by the detector is indicative of sequentially acquired frames with different angular coding of the input light field; and a data processor configured and operable to process the image data utilizing data indicative of the different angular coding applied in said sequentially acquired frames and to generate reconstructed spectral data of the object being imaged.

    26. A method for use in reconstructing spectral data of an object being imaged, the method comprising: performing one or more optical imaging sessions, each optical imaging session comprising creating an image of an input light field originated in a region of interest in an imaging plane defined by a pixel array while applying to said input light field being imaged a predetermined angular coding, such that detected light intensity at the pixel corresponds to spectral data in the image created during the imaging session multiplexed with the predetermined angular coding; and processing data indicative of the detected light intensity in each of said one or more imaging sessions, utilizing the predetermined angular coding, and determining the spectral data of the object.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0044] In order to better understand the subject matter that is disclosed herein and to exemplify how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:

    [0045] FIG. 1 is a schematic illustration of a sensing system utilizing an imaging system of the invention;

    [0046] FIGS. 2A and 2B more specifically illustrate two non-limiting examples, respectively of the imaging system of the invention;

    [0047] FIGS. 2C and 2D show schematically the light propagation scheme in the imaging system configurations of FIGS. 2A and 2B according to some embodiments of the invention;

    [0048] FIG. 3 illustrates a flow diagram of an example of the method of the invention for operating the imaging system and processing the image data for reconstructing the spectral data of an object/scene being imaged;

    [0049] FIGS. 4A and 4B show simulation results obtained by the inventors, where FIG. 4A shows a cropped RGB image with marked regions of interest, and FIG. 4B shows original and reconstructed spectra for each of the regions of interest;

    [0050] FIG. 5 shows schematically the light propagation scheme in the imaging system configuration according to some other embodiments of the invention; and

    [0051] FIGS. 6 and 7 exemplify the operational scheme of the present invention in the embodiments of FIG. 5.

    DETAILED DESCRIPTION OF EMBODIMENTS

    [0052] Referring to FIG. 1, there is schematically illustrated, by way of a block diagram, a sensing system (detection system) 10 utilizing an imaging system 100 of the invention. The imaging system 100 includes an optical unit 102 configured to be located in front of a light sensitive surface (detection/image plane) defined by a pixel array unit 104 of the sensing system, and a data processing unit 106 configured for data communication with a readout circuit of the pixel array unit 104 for receiving image data therefrom. The data processing unit 106 may be integral with the pixel array unit 104, e.g. may be a software module of the readout circuit of the pixel array unit 104.

    [0053] The optical unit 102 is configured and operable for applying predetermined coding to an input light field while creating an optical image thereof on an imaging plane defined by the pixel array unit 104. The image data, corresponding to the detected light signal, is a function of the predetermined coding and a spectrum of the object (i.e. of the input light field). The predetermined coding is defined by an effective transmittance function (ETF) of the optical unit used in the respective image acquisition session (frame).

    [0054] The optical unit 102 includes a coder assembly configured for applying the predetermined coding, and an imaging lens module. The imaging plane is located in the back focal plane of the lens module. The coder assembly may be accommodated in the optical path of the input light between the lens module and the detection plane; or may be accommodated upstream of the lens module and operable in either transmission or reflection mode. The detector may be a monochromatic or a color detector, as described above. The coding applied to the input light field is defined by the ETF of the optical unit. The ETF is a function of wavelength, and in some embodiments is also a function of the angle of light propagation, and in some other embodiments a function of exposure time.

    [0055] The processor unit 106 includes inter alia data input/output utilities (not shown), a memory module 106A (e.g. for storing the ETF of the optical unit being used), and an analyzer module 106B adapted for analyzing the image data from the pixel array unit 104 using the ETF data of the optical unit for the corresponding image frame, and determining the spectra of the object. This will be described more specifically further below.

    [0056] In some embodiments of the invention, the processor unit 106 also includes a controller 106C associated with the optical unit 102 for managing the controllable variation of the ETF of the optical unit 102. As indicated above, and will be described more specifically further below, the ETF of the optical unit 102 may be variable. In some embodiments, the controller 106C is also associated with a shutter of the pixel array unit for controlling (or receiving data indicative of) the exposure time pattern.

    [0057] As described above, in some embodiments of the invention, the optical unit 102 is configured and operable for applying angular coding to the input light field while creating an optical image thereof on a detection plane, i.e. light sensitive surface of pixel array unit 104. The angular coding applied by the optical unit 102 is defined by the (ETF) of the optical unit, which is a function of both light propagation angle and wavelength.

    [0058] Typically, in an imaging system, each pixel measures the overall intensity of light rays incident on said pixel at different angles (the light ray bundle between the two marginal rays). The optical unit 102 including an angular coding imaging assembly provides that each light ray of the input light L.sub.in impinges on the optical unit at a slightly different angle, and the corresponding output light L.sub.out has a slightly modified transmission spectrum. Thus, each pixel in the pixel array unit 104 measures the integrated intensity of multiple weighted modified spectra. As the ETF of the optical unit 102, for the given image frame being acquired, is fixed and known, the only variable affecting the detected intensity is the spectrum of the object. Assuming that adjacent pixels share the same spectrum, a reconstruction algorithm can be applied to recover the spectrum of the object.

    [0059] More specifically, in such embodiments, the optical unit 102 includes a dispersive unit/element (constituting an angular coding unit) and an imaging lens module (constituting an imaging unit). The ETF of such optical unit 102 is thus defined by spectral transmittance of the dispersive unit and angular properties of the lens module. As will be described more specifically further below, the ETF of the angular coding based optical unit 102 may be varied by applying at least one of the following: changing the dispersive pattern of the tunable dispersive element, varying the focal length of the lens module, and affecting an angular position of the entire dispersive element with respect to the optical axis.

    [0060] Reference is made to FIGS. 2A and 2B showing schematically two specific but not limiting examples of the configuration of the optical unit 102. To facilitate understanding, the same reference numbers are used for identifying components that are common in all the examples. As shown, the optical unit 102 includes a coder unit (e.g. dispersive unit/element) 102A and a lens module 102B (single imaging lens in the present example). The detection plane DP defined by the pixel array unit 104 is located in the back focal plane of the lens module 102B.

    [0061] In the example of FIG. 2A, the coder unit 102A is located between the lens 102B and the detection plane DP, and in the example of FIG. 2B, the coder unit 102A is located in front of the lens module 102B.

    [0062] Considering the angular coding based optical unit, it may include a dispersive unit including a dispersive element of any known suitable configuration, being either active (tunable dispersive pattern) or passive (fixed dispersive pattern). For example, an etalon can be used as a dispersive element. In the simulations performed by the inventors, an air spaced Fabry-Perot etalon was used as the dispersive element, whose transmitted spectrum varies with the incidence angle.

    [0063] Reference is made to FIGS. 2C and 2D exemplifying schematically the light propagation through the imaging system of the configurations of FIGS. 2A and 2B, respectively, during the frame acquisition, for the angular coding based optical unit. As shown in FIG. 2C, each ray of the input light field L.sub.in impinges on the dispersive element 102A at a different angle, which modifies its transmitted spectrum. The variation of the transmission spectrum across angles is a known characteristic of the dispersive element. Each pixel thus measures a weighted sum of the rays, each of them with a different spectrum. The coefficients of the weighted sum are known and are a property of the optical design. By applying an image processing algorithm for image segmentation, followed by a spectral decomposition algorithm, a hyperspectral cube can be reconstructed.

    [0064] More specifically, per pixel, the overall acquired intensity could be described by:

    [00001] I = Σ_λ Σ_θ w_θ · T_θ(λ) · R_λ   (1)

    where I denotes the total acquired signal, w_θ denotes the weight of the rays with angle θ, T_θ(λ) denotes the spectral transmittance of the etalon at angle θ, and R_λ denotes the object's reflectance spectrum. Close to the optical axis, the light spot is approximately circular, so the angular weight function takes the form:

    [00002] w_θ = (8/(θ_h − θ_l)²) · { 0, for θ < θ_l or θ > θ_h; ½·√((θ_h − CRA)² − (θ − CRA)²), otherwise }   (2)

    where θ_l and θ_h denote the angles of the lower and upper marginal rays, respectively, and CRA denotes the chief ray angle.

    [0065] Then, the generalized multiplexed spectral transmittance is given by:

    [00003] T_CRA(λ) = Σ_θ w_θ · T_θ(λ)   (3)

    [0066] Plugging eq. (3) in eq. (1) yields:

    [00004] I = Σ_λ T_CRA(λ) · R_λ   (4)
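    By way of a non-limiting illustration of eqs. (1)-(3), the angular multiplexing can be sketched for an ideal lossless air-spaced Fabry-Perot etalon (the standard Airy transmittance formula is used; the gap, mirror reflectance, angles and weights below are hypothetical values, not parameters disclosed herein):

```python
import math

def etalon_transmittance(wavelength_nm, angle_rad, gap_nm=5000.0, reflectance=0.7):
    """Ideal lossless air-spaced Fabry-Perot etalon (Airy function).
    The transmission peaks shift with incidence angle via cos(theta)."""
    finesse_coeff = 4.0 * reflectance / (1.0 - reflectance) ** 2
    # Round-trip phase: delta = 4*pi*n*d*cos(theta)/lambda, with n = 1 (air gap)
    delta = 4.0 * math.pi * gap_nm * math.cos(angle_rad) / wavelength_nm
    return 1.0 / (1.0 + finesse_coeff * math.sin(delta / 2.0) ** 2)

def multiplexed_etf(wavelength_nm, angles_rad, weights):
    """Eq. (3): T_CRA(lambda) = sum over theta of w_theta * T_theta(lambda)."""
    return sum(w * etalon_transmittance(wavelength_nm, a)
               for a, w in zip(angles_rad, weights))

# Hypothetical ray bundle: three angles with normalized weights
etf_550 = multiplexed_etf(550.0, [0.0, 0.1, 0.2], [0.5, 0.3, 0.2])
```

    The sketch shows the key property exploited by the angular coding: the etalon transmittance at a fixed wavelength changes with the incidence angle, so the effective transmittance seen by a pixel is the weighted mixture of eq. (3).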

    [0067] Alternatively, the dispersive element could be placed in front of the lens as shown in FIGS. 2B and 2D. In cases where the imaged object is far enough, this configuration is simplified as the weight function could be assumed to be uniform:


    w_θ = 1, ∀ θ ∈ field of view   (5)

    [0068] As shown in the simple case of FIG. 2D, a single object emits a spectrum of only three wavelengths, imaged onto three pixels. The dispersive element has a transmittance function that depends on both wavelength and angle. Per point in the scene, the angle is quite uniform across the lens (effectively equal to the CRA). Hence, per pixel, all rays are assumed to arrive at a single angle. Then, the matrix of measured intensities (pixel values) is given by:

    [00005]
    ( I_pix1 )   ( T_11  T_12  T_13 )   ( R(λ_1) )
    ( I_pix2 ) = ( T_21  T_22  T_23 ) · ( R(λ_2) )
    ( I_pix3 )   ( T_31  T_32  T_33 )   ( R(λ_3) )   (6)

    where R is the spectrum of the object (three wavelengths only: λ_1, λ_2 and λ_3); T is the transmittance matrix that depends on both wavelength and angle (for example, of a standard etalon); T_i,j describes the transmittance at the i-th angle of the j-th wavelength; and I is the measurement vector.

    [0069] Thus, R is the only unknown and could be reconstructed by:


    R = T⁻¹ · I   (7)
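    By way of a non-limiting illustration of eqs. (6)-(7), the following sketch forms the measurement vector for a hypothetical 3×3 transmittance matrix and recovers the three-band spectrum by solving the linear system (all numerical values are illustrative only):

```python
def solve3(T, I):
    """Solve the 3x3 linear system T @ R = I (i.e. R = T^-1 * I of eq. (7))
    by Gauss-Jordan elimination with partial pivoting."""
    a = [row[:] + [b] for row, b in zip(T, I)]  # augmented matrix
    n = 3
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[r][n] / a[r][r] for r in range(n)]

# Hypothetical transmittance matrix (rows: angles, columns: wavelengths)
T = [[0.9, 0.2, 0.1],
     [0.3, 0.8, 0.2],
     [0.1, 0.3, 0.7]]
R_true = [1.0, 0.5, 0.25]  # object spectrum at lambda_1..lambda_3
I = [sum(t * r for t, r in zip(row, R_true)) for row in T]  # eq. (6)
R = solve3(T, I)  # recovered spectrum per eq. (7)
```

    With noiseless measurements and an invertible transmittance matrix, the recovered spectrum matches the assumed object spectrum to floating-point precision.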

    [0070] Practical considerations require a much higher number of pixels (i.e. measurement angles). As will be described further below, in order to further increase the spectral resolution of the measurements, a sequence of multiple frames can be imaged using different ETFs of the optical unit for different frames. Variation of the ETF may be performed by displacing or tilting the dispersive element; and/or displacing the lens (by the focus mechanism) parallel to the optical axis; and/or displacing the lens (by the OIS mechanism) perpendicular to the optical axis.

    [0071] The following is an example of the spectral reconstruction process used in the embodiments of the invention utilizing angular coding of the input light field being imaged. In this connection, reference is made to FIGS. 1 and 3 showing the construction of the processing unit 106 and a flow diagram 200 of its operation. The processing unit 106 operates to apply a spectral reconstruction procedure to the image data per frame. Such procedure includes two phases: a pre-processing phase 202A and a reconstruction phase 202B.

    [0072] The pre-processing phase 202A is carried out by a segmenting module of the analyzer 106B and is applied to the image data resulting from the frame acquisition. The pre-processing includes applying an image segmentation algorithm to the frame data aimed at distinguishing objects within the scene. A possible realization of such algorithms could be based on any known pattern recognition technique, as described for example in [9], incorporated herein by reference. The object (or segment) that is to be identified is one of uniform spectral content. The latter means that, for a single segmented object, the detected pixels' intensities differ only by a gain factor (due to spatial lighting variations) and not by spectral content. Thus, such an object/segment is identified and the image data from the respective pixels is processed by the spectral analyzer module of the analyzer 106B to reconstruct the spectral content of the object (step 202B). The spectral analyzer module operates to apply spectral decomposition algorithms to the image data to reconstruct a hyperspectral cube.

    [0073] Assuming that the setup described in FIG. 2B is utilized, each pixel is related to a specific CRA, which depends on the distance from the optical axis (due to axial symmetry). Thus, several pixels may have equal radii and, thus, equal CRAs, which is advantageous for noise handling. In this case, eq. (4) can be rewritten as:

    [00006]
    ( T_1(λ_1)  T_1(λ_2)  …  T_1(λ_N) )   ( R(λ_1) )   ( I_1 )
    ( T_2(λ_1)  T_2(λ_2)  …  T_2(λ_N) ) · ( R(λ_2) ) = ( I_2 )
    (    ⋮          ⋮            ⋮     )   (   ⋮    )   (  ⋮  )
    ( T_M(λ_1)  T_M(λ_2)  …  T_M(λ_N) )   ( R(λ_N) )   ( I_M )   (8)

    where M is the number of pixels (and CRAs) associated with the object, and N is the number of spectral bands. In case M < N, a linear least-mean-squares solution is applied.
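    By way of a non-limiting illustration of eq. (8), the following sketch computes a least-mean-squares estimate via the normal equations, shown here for a hypothetical over-determined case (M = 4 pixels, N = 2 bands; all values are illustrative only):

```python
def lstsq_normal(T, I):
    """Least-mean-squares estimate of R in T @ R = I (cf. eq. (8)),
    obtained by solving the normal equations (T^T T) R = T^T I."""
    M, N = len(T), len(T[0])
    TtT = [[sum(T[m][i] * T[m][j] for m in range(M)) for j in range(N)]
           for i in range(N)]
    TtI = [sum(T[m][i] * I[m] for m in range(M)) for i in range(N)]
    a = [row[:] + [b] for row, b in zip(TtT, TtI)]  # augmented N x (N+1) system
    for c in range(N):                              # Gauss-Jordan elimination
        p = max(range(c, N), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        for r in range(N):
            if r != c:
                f = a[r][c] / a[c][c]
                a[r] = [x - f * y for x, y in zip(a[r], a[c])]
    return [a[r][N] / a[r][r] for r in range(N)]

# Hypothetical over-determined case: M = 4 pixels (CRAs), N = 2 spectral bands
T = [[0.9, 0.1], [0.7, 0.3], [0.4, 0.6], [0.2, 0.8]]
R_true = [1.0, 2.0]
I = [sum(t * r for t, r in zip(row, R_true)) for row in T]  # noiseless data
R_est = lstsq_normal(T, I)
```

    For noiseless, consistent measurements the normal-equation estimate reproduces the assumed spectrum exactly; with noise it yields the least-squares fit, which is the point of using several pixels with equal CRAs.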

    [0074] As further shown in FIG. 3, in case the penalty in resolution is not acceptable, additional frames can be acquired with different effective spectral transmittance of the optical unit. The effective spectral transmittance can be varied by displacing the lens (by the focus mechanism) parallel to the optical axis, displacing the lens (e.g. by an optical image stabilizer (OIS) mechanism) perpendicular to the optical axis, or displacing or tilting the dispersive element. Processing of the image data for different frames allows reconstruction of different spectral segments of the object's spectrum.

    [0075] Reference is made to FIGS. 4A and 4B showing the simulation results obtained by the inventors. FIG. 4A shows a cropped RGB image taken from the hyperspectral cube (obtained from [10]), where rectangle marks M.sub.1 and M.sub.2 correspond to the two regions of interest. FIG. 4B shows the original and reconstructed spectra of region M.sub.1, and the original and reconstructed spectra of region M.sub.2.

    [0076] As indicated above, in some other embodiments of the invention, a tunable filter and a rolling shutter type detector array are utilized, enabling spectral information of an object to be obtained by using at least two image frames obtained with a time-varying ETF. In this connection, reference is made to FIG. 5, which illustrates a system 300 including a rolling shutter type detector array 304; an optical unit 102 including an imaging arrangement (e.g. lens arrangement) 102B configured for directing light coming from a scene to generate an image on the detector array, and a tunable filter 302 (constituting a coder unit) located in the optical path of input light propagating towards the detector. As described above, the filter 302 may be upstream or downstream of the imaging arrangement 102B with respect to the input light propagation direction. The filter 302 may also be located between the elements of the imaging arrangement 102B, as the case may be.

    [0077] The tunable filter 302 is configured to vary the ETF thereof, between two or more different transmission profiles, with a predetermined time pattern. More specifically, the tunable filter 302 may be a color filter having two or more different color transmission profiles, where switching between said transmission profiles is performed at predetermined time intervals. For example, the tunable filter may vary its transmission between a predetermined bandwidth around 700 nm and a predetermined bandwidth around 500 nm, i.e. between red and green transmission. Alternatively, the filter 302 may vary its transmission between red, green and blue; or between red/green/blue and white (full spectrum). It should also be noted that the transmission profile may include near IR wavelengths and/or near UV spectra. It should also be noted that the tunable filter 302 may be configured to cover a full frame region, i.e. global filtering.

    [0078] Generally, the time pattern of the ETF variation of the tunable filter 302 is configured to correspond to the exposure time of one or more rows of the rolling shutter type detector array 304. Thus, the detector array 304 generates image data in which different rows correspond to different transmission profiles (different ETFs) of the tunable filter 302.

    [0079] The system also includes a data processing unit 106 configured as described above, to receive image data from the detector 304 and determine spectral information of one or more objects in a field of view being imaged. Generally, the image data may be such that objects of interest occupy at least a predetermined number of rows of pixels. Thus, the detector 304 collects input light from different regions of the object through two or more different wavelength filtering. Based on predetermined data about spectral response of the detector elements of the detector array, and data about the time pattern and the two or more transmission profiles of the tunable filter 302, the data processing unit 106 may determine the spectral information of the object of interest.

    [0080] It should be noted that some rows in the image data may result from exposure to two or more filtering profiles, as the tunable filter may vary its transmission while these rows are exposed to input light. Based on the time pattern of the ETF variation, the image data of such rows of pixels can be expressed by weighting the effect of each ETF according to the relative exposure time corresponding to each ETF.

    [0081] It is generally known that rolling shutter image sensors acquire an image row by row with a predetermined exposure time and a typically short readout time per row (15-40 μs). Acquiring a series of frames (for example, video or burst mode) is executed such that the first row of each successive frame is read right after the last row of the preceding frame. Thus, the exposure scheme of a frame (or multiple frames) is a parallelogram within the row-time plane.

    [0082] FIG. 6 illustrates the concept of the rolling shutter type detector and the readout thereof. In this non-limiting example, the exposure time is 9 time units, while the readout takes 1 time unit; this is marked for the top row. Thus, the second row begins its exposure one time unit after the beginning of the exposure of the first row, such that it is read out 1 time unit after the readout of the previous row. As shown, the filter profile varies with time, in this example every 10 time units, so that different rows are exposed to input light of different wavelength profiles. For each row, the collected data corresponds to input light of weighted filtering, based on the time during which the tunable filter has a first transmission profile and the time during which it has a second transmission profile (and third and fourth, when applied).
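    By way of a non-limiting illustration of the exposure scheme of FIG. 6, the following sketch computes, for each row, the fraction of its exposure spent under each filter state (the exposure of 9 time units, row delay of 1 and switching period of 10 follow the example above; the function name is illustrative only):

```python
def filter_mix(row, exposure=9, row_delay=1, period=10, n_states=2):
    """Fraction of a row's exposure spent under each filter state.
    Row r is exposed during [r*row_delay, r*row_delay + exposure);
    the filter switches state every `period` time units."""
    start = row * row_delay
    weights = [0.0] * n_states
    for t in range(start, start + exposure):  # unit time steps
        weights[(t // period) % n_states] += 1.0 / exposure
    return weights
```

    For instance, the top row (row 0) is exposed entirely under the first filter state, while row 5 collects 5/9 of its exposure under the first state and 4/9 under the second, matching the "varying linear mixture of filter states" described above.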

    [0083] Thus, each row-image is acquired with a varying mixture of two or more filter modes. For example, the first row is acquired with a single filter state (red state), whereas the last row acquires the light with a white state. All other rows acquire the image with a varying linear mixture of filter states (red and white).

    [0084] If such a system is used to image a rectangular object of N rows, the image contains N different weighted mixtures of red and white filter states, which allows reconstructing the spectrum of the object. Such spectrum reconstruction may provide N spectral bands.

    [0085] The object identification can be done by image segmentation (e.g. to determine that a single object is detected and not multiple objects). Various object detection algorithms may be used.

    [0086] The spectral reconstruction utilizes data about the number N of rows identified as part of the object. The system determines a cross-section of the intensity along a vertical line within the image, to obtain the following (for an object occupying 4 rows in the image data):

    [00007]
    I_1 = (1/(T_1R + T_1W)) · (T_1R · F_R · E + T_1W · F_W · E)
    I_2 = (1/(T_2R + T_2W)) · (T_2R · F_R · E + T_2W · F_W · E)
    I_3 = (1/(T_3R + T_3W)) · (T_3R · F_R · E + T_3W · F_W · E)
    I_4 = (1/(T_4R + T_4W)) · (T_4R · F_R · E + T_4W · F_W · E)

    where: [0087] T_ij is the exposure time for filter state j (Red/White) for the i-th row; [0088] F_R/F_W are the spectral transmission profiles of the filter states (resolution of 4 spectral bands, in general of N spectral bands); [0089] E is the spectral profile of the object to be reconstructed (resolution of 4 spectral bands).

    [0090] It should be noted that, as all the parameters other than the spectral profile E are either measured or known as parameters of the system, the data processing unit can apply the suitable algorithms to determine the spectral profile of the object E.
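    By way of a non-limiting illustration of the reconstruction of paragraphs [0086]-[0090], the following sketch builds the time-weighted row transmittances and solves a minimal two-row, two-band case with two filter states (the filter profiles, exposure times and object spectrum are hypothetical values):

```python
def row_transmittance(t_states, F_states):
    """Mixed spectral transmittance of one row: time-weighted average of
    the filter-state profiles (cf. the I_i expressions above)."""
    total = sum(t_states)
    n_bands = len(F_states[0])
    return [sum(t * F[b] for t, F in zip(t_states, F_states)) / total
            for b in range(n_bands)]

# Hypothetical 2-band profiles for "red" and "white" filter states
F_R = [0.9, 0.1]
F_W = [1.0, 1.0]
rows_t = [(9, 0), (5, 4)]            # per-row exposure under each state
T = [row_transmittance(t, [F_R, F_W]) for t in rows_t]
E_true = [2.0, 3.0]                  # object spectral profile
I = [sum(Ti[b] * E_true[b] for b in range(2)) for Ti in T]
# 2x2 solve for E by Cramer's rule
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
E = [(I[0] * T[1][1] - I[1] * T[0][1]) / det,
     (T[0][0] * I[1] - T[1][0] * I[0]) / det]
```

    Because each row transmittance is a different mixture of the same filter profiles, the rows provide linearly independent measurements as long as their mixing weights differ, which is what the parallelogram exposure structure guarantees.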

    [0091] FIG. 7 exemplifies the technique of the invention utilizing 3 rows (many more rows are generally used), marked #1 (upper), #2 (middle) and #3 (lower). The transmission profiles of the filter are denoted herein below as blue for filter mode #1 and orange for filter mode #2. The average spectral transmittance obtained by the filter can be described by:


    T_1(λ) = t_1^blue · T^blue(λ) + t_1^orange · T^orange(λ)

    T_2(λ) = t_2^blue · T^blue(λ) + t_2^orange · T^orange(λ)

    T_3(λ) = t_3^blue · T^blue(λ) + t_3^orange · T^orange(λ)

    where: T^blue(λ) is the spectral transmittance of the blue mode and T^orange(λ) is the spectral transmittance of the orange mode; t_1^blue is the exposure time through the blue filter for row #1, and accordingly for the other rows and the other filter mode.

    [0092] Because of the parallelogram structure, the following applies:


    t_1^blue > t_2^blue > t_3^blue,  t_1^orange < t_2^orange < t_3^orange

    [0093] In view of the above, the inequality T_1(λ) ≠ T_2(λ) ≠ T_3(λ) provides a unique spectral transmittance for each row. Therefore, for objects large enough (tens of rows, for example 40) and with uniform emitted spectra, the image data contains spectral information of the object as if it were measured through a number of different spectral filters corresponding to the number of relevant rows. This provides N (e.g. 40) different measurements of the object's spectrum and allows its spectral profile to be determined.

    [0094] As indicated above, the total acquired signal is:


    I_1 = [t_1^blue · T^blue(λ) + t_1^orange · T^orange(λ)] · S(λ)

    I_2 = [t_2^blue · T^blue(λ) + t_2^orange · T^orange(λ)] · S(λ)

    I_3 = [t_3^blue · T^blue(λ) + t_3^orange · T^orange(λ)] · S(λ)

    where S(λ) is the spectral profile of light emitted or reflected from the object of interest, and I_j is the actual measured intensity for a pixel in the j-th row. Utilizing the predetermined and measured data, the spectral profile of the object may be determined.

    [0095] Thus, the present invention provides an effective technique for spectral data reconstruction. The technique of the invention provides for single snapshot reconstruction, or, if needed, multiple-frame data reconstruction for increasing the spectral resolution.