Method and a device for acquiring an image having two-dimensional spatial resolution and spectral resolution

10742908 · 2020-08-11

Abstract

The present disclosure relates to devices and methods for acquiring an image having two-dimensional spatial resolution and spectral resolution. An example method comprises: acquiring a frame using rows of photo-sensitive areas on a sensor surface detecting incident light from an object imaged by an optical system onto an image plane, wherein rows of photo-sensitive areas are arranged to receive different wavelengths; moving the sensor surface in a direction perpendicular to a longitudinal direction of the rows; repeating the acquiring and moving for acquiring a plurality of frames recording different spectral information for respective positions on the object; and combining information from the plurality of frames to form multiple channels of an image, wherein each channel is formed based on detected light in respective rows and represents a two-dimensional image of the object for a different wavelength.

Claims

1. A camera for acquiring a hyperspectral image having two-dimensional spatial resolution comprising: at least one optical system, wherein each optical system of the at least one optical system is configured to define an image plane and direct light from an object towards the image plane; at least one sensor surface comprising photo-sensitive areas for detecting incident light from one or more parts of the object, wherein the photo-sensitive areas are arranged to receive and detect the light from the object; at least one filter comprising a filter response that defines a plurality of wavelength bands, wherein the at least one filter is arranged in relation to the at least one sensor surface such that each wavelength band spatially corresponds to one or more rows of photo-sensitive areas; a translator, wherein the translator is configured to move the at least one sensor surface (i) in the image plane in a direction perpendicular to a longitudinal direction of the rows of photo-sensitive areas and (ii) at a distance corresponding to a height of photo-sensitive areas spatially corresponding to a respective wavelength band; the translator being configured for synchronized movement of the at least one sensor surface with acquiring of a plurality of frames, wherein a frame in the plurality of frames is acquired by arranging a sub-set of the rows of photo-sensitive areas in the image plane to receive incident light from the at least one optical system; and a combining unit for combining information from a plurality of frames to form multiple channels of an image and to represent a two-dimensional hyperspectral image of the object, wherein each channel is formed based on detected light in respective rows of photo-sensitive areas.

2. The camera according to claim 1, wherein a number of the one or more rows of photo-sensitive areas spatially corresponding to a given wavelength band is different for different wavelength bands.

3. The camera according to claim 1, wherein a set of adjacent wavelength bands define a spectral range of an image, and wherein the at least one sensor surface comprises a plurality of sets of wavelength bands repeated on the at least one sensor surface.

4. The camera according to claim 1, further comprising a plurality of sensor surfaces, wherein the plurality of sensor surfaces are arranged in a common sensor plane and the translator is arranged to carry the common sensor plane including the plurality of sensor surfaces.

5. The camera according to claim 1, further comprising a plurality of optical systems, wherein each optical system in the plurality of optical systems is configured to define an image circle on an image plane and wherein a plurality of image circles are defined on a common image plane.

6. The camera according to claim 1, wherein the at least one sensor surface is tilted in relation to a direction of movement of the at least one sensor surface by the translator carrying the at least one sensor surface.

7. The camera according to claim 1, wherein the camera is arranged to acquire a first set of frames for forming a first image having two-dimensional spatial resolution and first spectral information while moving the translator in a first direction and acquire a second set of frames for forming a second image having two-dimensional spatial resolution and second spectral information while moving the translator in a second direction opposite to the first direction.

8. The camera according to claim 1, further comprising an illumination source, wherein the illumination source is controllable for controlling a spectral profile of illuminated light.

9. The camera according to claim 1, wherein the translator is configured to move the at least one sensor surface a distance corresponding to a height of an integer number of photo-sensitive areas, wherein the integer number of photo-sensitive areas is less than a number of rows spatially corresponding to a given wavelength band.

Description

BRIEF DESCRIPTION OF THE FIGURES

(1) The above, as well as additional, features will be better understood through the following illustrative and non-limiting detailed description of example embodiments, with reference to the appended drawings.

(2) FIG. 1 is a schematic drawing of a device, according to an example embodiment.

(3) FIG. 2 is a schematic drawing illustrating movement of a sensor surface, according to an example embodiment.

(4) FIG. 3 is a schematic drawing illustrating information acquired in different frames, according to an example embodiment.

(5) FIG. 4 is a schematic drawing illustrating an image having two-dimensional spatial resolution and spectral resolution being formed based on the information acquired in the frames of FIG. 3.

(6) FIG. 5 is a schematic drawing illustrating information acquired in different frames, according to an example embodiment.

(7) FIG. 6 is a schematic drawing illustrating an image having two-dimensional spatial resolution and spectral resolution being formed based on the information acquired in the frames of FIG. 5.

(8) FIG. 7 is a flow chart of a method, according to an example embodiment.

(9) All the figures are schematic, not necessarily to scale, and generally only show parts which are necessary to elucidate example embodiments, wherein other parts may be omitted or merely suggested.

DETAILED DESCRIPTION

(10) Example embodiments will now be described more fully hereinafter with reference to the accompanying drawings. That which is encompassed by the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example. Furthermore, like numbers refer to the same or similar elements or components throughout.

(11) The present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which example embodiments are shown. This disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for thoroughness and completeness, and fully convey the scope of the disclosure to the skilled person.

(12) Referring now to FIG. 1, a device 100 for acquiring an image having two-dimensional spatial resolution and spectral resolution will be described. The device 100 comprises an optical system 102, which is configured to image an object towards an image plane 104 forming an image circle in the image plane 104.

(13) The optical system 102 may comprise a number of optical components for properly imaging the object, such as apertures, stops, and lenses. The optical system 102 may be adaptable to vary e.g. focus or magnification of the optical system 102.

(14) The device 100 further comprises a sensor surface 110, which may be arranged in the image plane 104 of the optical system. Thus, the optical system 102 may be arranged to direct light from an object towards the sensor surface 110 in the image plane 104.

(15) The device 100 may comprise a plurality of optical systems 102, which may be arranged side-by-side to each form an image circle in a common image plane 104. The optical systems 102 may each have different configurations enabling imaging of an object with different optical set-ups.

(16) As will be further described below, the sensor surface 110 may be movable in the image plane 104. When the device 100 comprises a plurality of optical systems 102, the sensor surface 110 may be movable in the common image plane 104 of the plurality of optical systems 102, such that the sensor surface 110 may in different frames record light that has passed different optical systems 102 and optical set-ups.

(17) The device 100 may further comprise a plurality of sensor surfaces 110, which may be arranged in a common sensor plane. Each sensor surface 110 may be adapted for detection of a specific range of wavelengths, e.g. ultraviolet, visible or infrared light. The plurality of sensor surfaces 110 may thus enable acquiring light over a very broad range of wavelengths, which may be useful for imaging an object with a spectral resolution spanning the broad range of wavelengths.

(18) The plurality of sensor surfaces 110 may be used in combination with a plurality of optical systems 102, such that an object may be imaged with different optical set-ups, while acquiring light over a very broad range of wavelengths.

(19) Although the device 100 may comprise a plurality of optical systems 102 and a plurality of sensor surfaces 110, for simplicity and brevity the device 100 will mainly be described below with reference to a single optical system 102 and a single sensor surface 110. Unless specifically stated below, the features described will also apply to a device 100 comprising a plurality of optical systems 102 and/or a plurality of sensor surfaces 110.

(20) The device 100 may optionally comprise a light source 150 for illuminating the object, in order to provide desired lighting conditions when acquiring an image. The light source 150 may be arranged to provide illumination of specific wavelengths in order for the light to interact with the object, such as being specularly or diffusely reflected or inducing emission of light, such as through fluorescence. The sensor surface 110 may thus be arranged to receive and detect light from the object.

(21) The sensor surface 110 may comprise photo-sensitive areas 112, which may be arranged in columns and rows. The sensor surface 110 may comprise complementary metal-oxide-semiconductor (CMOS) circuitry for arranging the photo-sensitive areas 112 on the surface 110 and circuitry for controlling read-out of detection of light in the photo-sensitive areas 112. The photo-sensitive areas 112 may also be referred to as pixels.

(22) The photo-sensitive areas 112 and the circuitry on the sensor surface 110 may together form an image sensor for acquiring frames of image information. Each frame may comprise information of detected incident light in at least a sub-set of rows of photo-sensitive areas 112. The image sensor may further be arranged to acquire a plurality of frames, wherein the plurality of frames may be combined to represent a two-dimensional image of the object having a spectral resolution, as will be described later.

(23) A filter 114 may be integrated on the sensor surface 110. The filter 114 may be arranged to pass specific wavelengths to rows of photo-sensitive areas 112. Thus, the photo-sensitive areas 112 in a row may all be arranged to detect the same wavelengths of light. Further, rows of photo-sensitive areas may be arranged in wavelength bands such that a plurality of rows receives the same wavelengths of light, whereas different wavelength bands receive different wavelengths.

(24) Each wavelength band may define a narrow range of wavelengths which are detected by the photo-sensitive areas 112 in the wavelength band. The wavelength bands may be a plurality of adjacent wavelength bands in a range of wavelengths. However, according to an alternative, the wavelength bands may be a plurality of selected wavelength bands from a range of wavelengths, wherein the wavelength bands are not necessarily adjacent to each other in the wavelength spectrum.

(25) Each wavelength band may define a different, pre-selected wavelength interval, which is detected by the photo-sensitive areas 112 in the wavelength band. The wavelength bands may be adapted to specific requirements, e.g. for facilitating analysis of an object for presence of a compound. The wavelength bands may alternatively comprise a number of adjacent wavelength intervals in a broad range of wavelengths allowing acquiring a two-dimensional image of an object with a spectral resolution facilitating general use of the spectral information.

(26) The sensor surface 110 may be mounted on a translator 120. The translator 120 may thus carry the sensor surface 110 and may accurately control placement of the sensor surface 110 in the image plane 104. The translator 120 may be arranged as a piezo-electric translation stage, which may be accurately controlled in order to provide an accurate placement of the sensor surface 110 in the image plane 104. Thus, the sensor surface 110 may be moved in the image plane 104.

(27) As mentioned above, the filter 114 may be integrated to the sensor surface 110 such that the filter 114 will move with the sensor surface 110 and the same row of photo-sensitive areas 112 will detect the same wavelengths of light regardless of the placement of the sensor surface 110 in the image plane 104. Alternatively, the filter 114 may also be mounted on the translator 120 or connected to the sensor surface 110, such that the filter 114 will move with the sensor surface 110.

(28) The device 100 may further comprise a control unit 130, which may be arranged to control the translator 120 and may further be arranged to control the image sensor to acquire a frame. The control unit 130 may thus be arranged to synchronize movement of the sensor surface 110 and acquiring of frames, as will be further described below.

(29) The control unit 130 may be implemented as a processing unit, such as a microprocessor, which may be programmable for controlling operation of the control unit 130. For instance, the processing unit may be a central processing unit (CPU). The processing unit may alternatively be special-purpose circuitry providing only specific logical operations. Thus, the processing unit may be provided in the form of an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP) or a field-programmable gate array (FPGA).

(30) The device 100 may also comprise a combining unit 132 for combining information from a plurality of frames to form multiple channels of an image. The combining unit 132 may be implemented in the same processing unit as the control unit 130 or in another processing unit specially adapted to combining of frames.

(31) It should be realized that one or more of the control unit 130 and the combining unit 132 may alternatively be arranged in an external unit and need not be part of the device 100. The device 100 may thus instead comprise an interface for receiving control signals from an external unit and/or transmitting information in acquired frames to an external unit.

(32) The interface may comprise a communication unit 140 for transmitting and/or receiving information to and from an external unit. The communication unit 140 may be arranged for wired or wireless communication.

(33) In some embodiments, a size of the device 100 may be critical, e.g. if the device 100 is to be used for endoscopic imaging. In such case, the control unit 130 and/or the combining unit 132 may alternatively be arranged in an external unit, such as a personal computer connected to the device 100 such that processing power is arranged externally to the device 100.

(34) The device 100 may be formed in a single housing, such that a relation between the optical system 102 and the translator 120 is well-controlled. This may also ensure that a compact assembly of the device 100 is provided.

(35) Referring now to FIG. 2, movement of the sensor surface 110 and acquiring of frames will be further explained. FIG. 2 illustrates an image circle 200 projected by the optical system 102 onto the image plane 104. The image circle is scanned by the sensor surface 110.

(36) The sensor surface 110 is moved in a direction perpendicular to a longitudinal direction of the rows of photo-sensitive areas 112, as indicated by arrow A. A plurality of frames is acquired while the sensor surface 110 is moved in the image plane 104. A row of photo-sensitive areas 112 may thus detect incident light in a number of frames, detecting light from different parts of the object in each frame.

(37) The plurality of frames may then be combined to form multiple channels of an image. Each channel may be formed based on detected light in a wavelength band and represent a two-dimensional image of the object for the wavelengths detected in the wavelength band. Together the multiple channels may form a hyperspectral cube, i.e. imaging the object in two spatial dimensions and in a third spectral dimension.

(38) The sensor surface 110 may be tilted in relation to a direction of movement of the sensor surface 110. Hence, a non-zero angle may be formed between a longitudinal direction of columns of the sensor surface 110 and the movement direction A. This implies that different rows of the sensor surface 110 may be arranged at different distances to the optical system 102 and may allow for diminishing errors e.g. due to chromatic aberrations of the optical system 102, which may cause the true image plane 104 to be at different distances from the optical system 102 for different wavelengths of light.

(39) The optical system 102 may also or alternatively be dynamically controlled such that the optical system 102 may be adapted to the wavelengths of light to be recorded in a specific frame in order to diminish errors due to e.g. chromatic aberrations.

(40) As indicated in FIG. 2, a first frame, frame 0, is acquired when the sensor surface 110 is mostly outside the image circle. In FIG. 2, only a single row of photo-sensitive areas 112, namely the row 116a leading the movement of the sensor surface 110 in the scanning direction A, receives light. Then, the sensor surface 110 is moved in the scanning direction A so that the row 116a in sequential frames receives light from different parts of the object. The sensor surface 110 is then gradually moved out of the image circle again until a last frame, frame N, is acquired when the sensor surface 110 is mostly outside the image circle again. In the last frame, only the row 116b trailing the movement in the scanning direction A receives light.

(41) Depending on the optical system 102 and sensor dimensions, a size of the scanned area may vary. However, to obtain a hyperspectral cube of a size of the sensor itself, the number of frames that needs to be acquired is two times the number of wavelength bands minus one, as illustrated in FIG. 2.
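As a purely illustrative sketch (in Python; the function name and the assumption of a step equal to one band height are introduced here only for illustration and are not part of the disclosure), this frame count can be computed as follows:

```python
def frames_for_full_cube(num_bands: int) -> int:
    """Frames needed so that every wavelength band has passed over an area
    the size of the sensor, assuming the step between frames equals the
    height of one wavelength band (cf. FIG. 2)."""
    return 2 * num_bands - 1


print(frames_for_full_cube(4))    # 7, as in the example of FIGS. 3-4 below
print(frames_for_full_cube(128))  # 255, as in the embodiment described below
```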

(42) As explained above, the device 100 may comprise a plurality of sensor surfaces 110. The plurality of sensor surfaces 110 may thus sequentially scan the image circle, whereby the object may be imaged by the optical system 102 onto a plurality of sensor surfaces 110 for recording different ranges of wavelengths. The recorded frames from the plurality of sensor surfaces 110 may be combined into a large hyperspectral cube spanning a very broad range of wavelengths.

(43) Also, the device 100 may comprise a plurality of optical systems 102. The optical systems 102 may image slightly different parts of an object, e.g. with different optical set-ups. A separate image in the form of a hyperspectral cube may be formed based on each optical system 102.

(44) If the object is moved in relation to the optical systems 102, the same part of the object may be imaged by the plurality of optical systems 102 in sequential imaging sequences.

(45) The plurality of optical systems 102 may be used for imaging an object with different configurations, varying e.g. apertures, focal lengths and/or optical filters.

(46) Differences in apertures between the optical systems enable, for instance, High Dynamic Range (HDR) image reconstruction. Differences in focal length between the optical systems enable scanning with different magnifications and fields of view, as, e.g., in a microscopy setup. Different optical filters between the optical systems enable enhancing the spectral quality by avoiding, for instance, spectral mixing in bands with multiple peaks. The different optical filters would also be needed when scanning with multiple sensor surfaces in different spectral ranges. A plurality of optical systems with different optical axes may enable (multi-)stereo 3D hyperspectral imaging.

(47) The plurality of frames (acquired in relation to a single optical system 102) should be acquired while the object is static, such that motion blur is not introduced into the image.

(48) According to an alternative embodiment, the device 100 is arranged to move in concert with the object, e.g. along a conveyor belt, such that the object appears static in relation to the optical system 102. Thus, the plurality of frames may be acquired while a same position on the object is imaged onto a same position in the image plane 104, such that no motion blur will be introduced in the acquiring of the plurality of frames.

(49) The device 100 may be used in a start/stop mode, where translation of the sensor surface 110 is halted between frames. Thus, no motion of the sensor surface 110 occurs during acquiring of a single frame and any motion blur may thus be avoided.

(50) However, the device 100 may alternatively be used in a continuous mode. The frames may thus be acquired when the sensor surface 110 is at specific positions in the image plane 104 by providing triggering of the acquiring of a frame in synchronization with a speed of movement of the sensor surface 110. The translator 120 may move the sensor surface 110 with such precision that sub-pixel registration of the frames may be allowed. The speed of movement of the sensor surface 110 may be so low that the sensor surface 110 is not moved a distance longer than a height of a wavelength band during acquisition time. Pixel blur (i.e. a displacement in number of pixels of the sensor surface 110 occurring during image acquisition) may be controlled and the resulting image can be binned to reduce noise.
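As a rough, illustrative sketch of this speed constraint (the function and parameter names, and the example exposure time of 1 ms, are assumptions made for illustration only), the upper bound on the continuous-mode speed may be related to the band height and the exposure time as follows:

```python
def max_continuous_speed_um_per_s(rows_per_band: int,
                                  pixel_pitch_um: float,
                                  exposure_time_s: float) -> float:
    """Upper bound on the translator speed such that the sensor surface
    moves less than one wavelength-band height during a single exposure."""
    band_height_um = rows_per_band * pixel_pitch_um
    return band_height_um / exposure_time_s


# Example: 8-row bands, 5.5 um pixel pitch, 1 ms exposure (assumed)
# -> at most 44 000 um/s, i.e. 44 mm/s.
print(max_continuous_speed_um_per_s(8, 5.5, 1e-3))  # 44000.0
```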

(51) In either of the above modes movement of the translator 120 needs to be accurately controlled, such that a position of a row of photo-sensitive areas 112 of the sensor surface 110 in the imaging plane 104 is accurately known. In the start/stop mode, the translator 120 may be arranged to move the sensor surface 110 a distance corresponding to a height of an integer number of photo-sensitive areas 112, e.g. corresponding to a size of a wavelength band.

(52) The light source 150 may further be controlled, e.g. by the control unit 130, such that a spectral profile (e.g. a specific wavelength of light) of the emitted light may match a sensitivity of the rows 116 of the sensor surface 110 which are arranged in the image circle.

(53) By the light source 150 being controlled by the control unit 130, synchronization of the emitted light with the movement of the sensor surface 110 may be facilitated. However, it should be realized that the light source 150 may be controlled by a separate controller, e.g. embedded in the light source 150.

(54) In one embodiment, the light source 150 may be controlled to change the spectral profile at each frame to be acquired. The light source 150 may thus be tuned to specifically match a quantum efficiency and spectral range of each wavelength band that passes the image circle. Alternatively, the light source 150 may be controlled to change the spectral profile one or a few times during scanning of the sensor surface 110 over the image circle to adjust to a continuous change of sensitivity of the wavelength bands in the image circle.

(55) In one embodiment, the light source 150 may be controlled to change in relation to different sensor surfaces 110 being arranged in the image circle. For instance, if a first sensor surface 110 is arranged to detect ultraviolet light and a second sensor surface 110 is arranged to detect visible light, the light source 150 may be controlled to emit ultraviolet light when the first sensor surface 110 is in the image circle and to emit visible light when the second sensor surface 110 is in the image circle.

(56) In another embodiment, the light source 150 may be controlled to change the illumination for different passes of the sensor surface 110 over the image circle. For instance, a broadband visible light illumination may first be used in a first scan of the sensor surface 110 over the image circle. Then, an illumination for inducing fluorescence may be used in a second scan allowing acquiring a first image of an object in visible light and a second fluorescence image of the object. This may be very useful in some applications, such as fluorescence guided surgery, where fluorescence localization algorithms require intrinsic measurements (at the excitation wavelength) and fluorescence measurements.

(57) Referring now to FIGS. 3-6, two examples of movement of the sensor surface 110 and the combination of frames into an image having spectral resolution will be given. The movement between subsequent frames is called a step-size and is quantified as the number of pixels to which the movement corresponds.

(58) In a first example, illustrated in FIGS. 3-4, the sensor surface 110 comprises 4 wavelength bands, each comprising 4 rows of pixels. The sensor surface 110 is moved using a 4 pixel step between frames. In this case, 7 frames are acquired in order to complete a data set for forming four channels each representing a two-dimensional image of the object for a specific wavelength band.

(59) It is clear from FIG. 3 that only pixel positions 13-28 contain all spectral bands and a combined image may thus be formed, as illustrated in FIG. 4, for these pixel positions. The combined image comprises spectral information of four different wavelength bands for every spatial position in the image and the combined image is as large as the size of the image sensor.

(60) In the example of FIGS. 5-6, the sensor surface 110 comprises 3 wavelength bands, each comprising 8 rows of pixels. The sensor surface 110 is moved using a 3 pixel step between frames. Here, 14 frames are acquired as illustrated in FIG. 5.

(61) Since each wavelength band comprises 8 rows of pixels, and a 3 pixel step is used, a spatial position of the object is imaged in a single wavelength band in a plurality of frames. This allows pixels at the edge of each band to be discarded, in order to avoid cross-talk between adjacent wavelength bands. Further, information relating to each spatial position is still acquired in two frames for each wavelength band. Information from a plurality of frames may be combined in several different ways. For instance, an average of the detected incident light in the plurality of frames may be used. Alternatively, a median value, a minimum value, a maximum value or a percentile value may be used.
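The following sketch illustrates, in Python, one possible way of performing such a combination. It is not the combining unit 132 itself; the coordinate convention and the function and parameter names are assumptions made for illustration, edge-row discarding is omitted for brevity, and the averaging could be replaced by a median, minimum, maximum or percentile value as noted above:

```python
import numpy as np


def combine_frames(frames, offsets_px, band_of_row, num_bands, num_lines):
    """Combine frames from a sensor scan into a hyperspectral cube.

    frames      : list of 2D arrays, shape (sensor_rows, columns), one per frame
    offsets_px  : sensor position (in pixel rows) when each frame was acquired
    band_of_row : wavelength-band index of each sensor row (fixed by the filter)
    num_bands   : number of wavelength bands on the sensor
    num_lines   : number of object lines in the reconstructed image

    Each (frame, sensor-row) pair contributes one line of one spectral channel;
    where several frames see the same object line in the same band, the
    detected values are averaged.
    """
    num_rows = len(band_of_row)
    num_cols = frames[0].shape[1]
    cube = np.zeros((num_bands, num_lines, num_cols))
    hits = np.zeros((num_bands, num_lines, 1))

    for offset, frame in zip(offsets_px, frames):
        for r, band in enumerate(band_of_row):
            # Assumed coordinate convention: object line seen by sensor
            # row r when the sensor sits at this offset.
            line = offset + r - (num_rows - 1)
            if 0 <= line < num_lines:
                cube[band, line] += frame[r]
                hits[band, line] += 1

    return cube / np.maximum(hits, 1)


# Example matching FIGS. 5-6: 3 bands of 8 rows, 3-pixel step, 14 frames.
rows_per_band, num_bands, step = 8, 3, 3
band_of_row = [r // rows_per_band for r in range(num_bands * rows_per_band)]
offsets = [f * step for f in range(14)]
frames = [np.random.rand(len(band_of_row), 16) for _ in range(14)]
cube = combine_frames(frames, offsets, band_of_row, num_bands, num_lines=24)
print(cube.shape)  # (3, 24, 16): 3 spectral channels of a 24 x 16 image
```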

(62) It should be realized that the above examples described in relation to FIGS. 3-6 are given in order to facilitate explanation of how a plurality of frames may be acquired and are related to each other. In practical examples, the size of the sensor surface 110 is larger and a larger number of wavelength bands may be used.

(63) In one embodiment, a sensor surface 110 comprises 128 wavelength bands of 8 pixels each (1024 pixel rows) by 2048 columns. The device 100 may then be arranged to acquire 255 frames using an 8 pixel step. Each pixel may have a height of 5.5 µm, which implies that the sensor surface will in total be moved 11.22 mm. The sensor can be operated at 350 frames per second, such that a full hyperspectral cube (1024×2048 pixels × 128 wavelength bands) may be acquired in 0.72 seconds. Thus, the full hyperspectral cube may be quickly obtained, which does not set very limiting requirements on having a static object in relation to the optical system 102.
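These figures follow from simple arithmetic; the short sketch below reproduces them (the variable names are chosen for illustration, and the total travel is counted as one band-height step per frame, which matches the 11.22 mm stated above):

```python
num_bands = 128
rows_per_band = 8            # 1024 pixel rows in total
step_px = rows_per_band      # one band height per frame
pixel_pitch_mm = 5.5e-3      # 5.5 um pixel height
frame_rate_hz = 350

num_frames = 2 * num_bands - 1                      # 255 frames
travel_mm = num_frames * step_px * pixel_pitch_mm   # total sensor movement
scan_time_s = num_frames / frame_rate_hz            # time for one full cube

print(num_frames, round(travel_mm, 2), round(scan_time_s, 2))
# 255 11.22 0.73  -> roughly the 0.72 seconds stated above
```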

(64) The wavelength bands on the sensor surface 110 may be designed with different widths (different number of rows per wavelength band). This may be used for adjusting a signal-to-noise ratio depending on quantum efficiency of the photo-sensitive areas 112 and filter response.

(65) A set of adjacent wavelength bands define a spectral range that may be acquired by the sensor surface 110. According to an embodiment, the sensor surface 110 comprises a plurality of sets of wavelength bands repeated on the sensor surface 110.

(66) For instance, a sensor surface 110 may have 128 different wavelength bands, each covering 8 rows of the sensor surface 110. By repeating the sets of wavelength bands on a same-size sensor surface 110, the sensor surface 110 may instead have the same 128 bands repeated twice, each band covering 4 rows of the sensor surface 110. Then, in order to acquire an image spectrally resolved over the 128 bands, it is only necessary to move the sensor surface 110 half the distance. Thus, a rate of acquired images may be increased, in particular if the sensor surface 110 is moved in continuous mode.

(67) Further, since the wavelength bands are repeated on the sensor surface 110, acquiring of a subsequent image may be initiated while a current image is acquired. For instance, with reference to FIG. 3, while frames 5-7 are acquired for a current image, frames 1-3 for the subsequent image may be acquired. This implies that the subsequent image may be acquired quickly after the acquiring of the current image.

(68) According to an embodiment, an image may be acquired when the sensor surface 110 is moved in a first direction over the image circle. Once the entire sensor surface 110 has been scanned, another image may be acquired while the sensor surface 110 is moved back over the image circle in a second direction opposite the first direction. Thus, images may be acquired as the translator 120 moves the sensor surface 110 back and forth over the image circle. The wavelength band leading the movement of the sensor surface 110 in the first direction will be trailing the movement of the sensor surface 110 in the second direction. This implies that an order of acquiring information relating to different wavelength bands will be changed between the first and second direction. However, this may be easily handled when combining frames into an image having spectral resolution.
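Continuing the illustrative combining sketch above (again with names and conventions introduced purely for illustration), handling the reversed band order can be as simple as feeding the combining routine the sensor offsets in reverse order for the return pass, since the row-to-band mapping itself is fixed by the filter 114:

```python
def pass_offsets(num_frames: int, step_px: int, forward: bool) -> list[int]:
    """Sensor offsets (in pixel rows) for one pass over the image circle.

    On the return pass the offsets simply run backwards, so the band that led
    the movement in the first direction trails it in the second; the combining
    routine only needs the correct offset for each frame."""
    offsets = [f * step_px for f in range(num_frames)]
    return offsets if forward else offsets[::-1]


print(pass_offsets(7, 4, forward=True))   # [0, 4, 8, 12, 16, 20, 24]
print(pass_offsets(7, 4, forward=False))  # [24, 20, 16, 12, 8, 4, 0]
```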

(69) Using a plurality of sets of wavelength bands repeated on the sensor surface 110 and acquiring images when moving the translator 120 in both the first direction and the opposite second direction may be used for acquiring images having a full spectral resolution at a very high rate. This may be used in order to acquire video-type imaging of an object or a scene with a high spectral resolution. For instance, using a sensor surface 110 having 64 wavelength bands, each of one row, repeated 8 times on the sensor surface 110, and scanning back and forth may result in a rate of about 10 images per second.

(70) Referring now to FIG. 7, a method 300 for acquiring an image will be described. The method comprises acquiring, step 302, a frame using rows of photo-sensitive areas 112 on a sensor surface 110. The method further comprises moving, step 304, the sensor surface 110 in the image plane 104 in a direction perpendicular to a longitudinal direction of the rows of photo-sensitive areas 112. The acquiring 302 of a frame is repeated for acquiring a plurality of frames, wherein the sensor surface 110 is arranged differently in relation to the optical system 102 for different frames. Thus, different spectral information for respective positions on the object is recorded by the sensor surface 110 in different frames.

(71) Then, information from the thus-acquired plurality of frames is combined, step 306, to form multiple channels of an image, wherein each channel is formed based on detected light in respective rows of photo-sensitive areas 112 and represents a two-dimensional image of the object for a different wavelength interval.

(72) Optionally, before initiating combining of a plurality of frames, a check may be performed whether the sensor surface 110 has been scanned over the entire surface to be scanned (e.g. over all wavelength bands), so that each spatial position of an object has been imaged onto each wavelength band on the sensor surface 110. If not, the repeating of the step 302 for acquiring a plurality of frames using different positions of the sensor surface 110 in relation to the optical system 102 may be continued in order to obtain further frames before combining of a plurality of frames is initiated.

(73) If the check finds that the desired frames have been acquired, the combining 306 of information from the thus-acquired plurality of frames may be initiated.

(74) It should be realized, however, that the step 306 of combining a plurality of frames to form multiple channels of an image may be initiated before all frames have been acquired. Also, an image may be formed even if all frames are, for some reason, not acquired. Hence, if the check never finds that the entire surface to be scanned has actually been scanned, the step 306 may still be performed based on the frames that have been acquired, to form an image which may lack information in some of the multiple channels.
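As a compact, purely illustrative summary of the method 300 (the `sensor`, `translator` and `combiner` objects and their interfaces are stand-ins assumed for this sketch, not elements of the disclosure), the acquire-move-check-combine loop may look as follows:

```python
def acquire_image(sensor, translator, combiner, frames_needed: int, step_px: int):
    """Sketch of method 300: repeat acquiring (step 302) and moving (step 304)
    until enough frames have been collected, then combine the frames into
    channels of an image (step 306)."""
    frames = []
    while len(frames) < frames_needed:          # check: entire surface scanned?
        frames.append(sensor.acquire_frame())   # step 302: acquire a frame
        translator.move_by(step_px)             # step 304: move the sensor surface
    return combiner.combine(frames)             # step 306: form multiple channels
```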

(75) In the above, the disclosure has mainly been described with reference to a limited number of embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the disclosure, as defined by the appended claims.

(76) While some embodiments have been illustrated and described in detail in the appended drawings and the foregoing description, such illustration and description are to be considered illustrative and not restrictive. Other variations to the disclosed embodiments can be understood and effected in practicing the claims, from a study of the drawings, the disclosure, and the appended claims. The mere fact that certain measures or features are recited in mutually different dependent claims does not indicate that a combination of these measures or features cannot be used. Any reference signs in the claims should not be construed as limiting the scope.