Optical spectrometer with enhanced spectral resolution from an unregistered tristimulus detector
09736400 · 2017-08-15
Assignee
Inventors
CPC classification
International classification
G01J3/00
PHYSICS
G01J3/46
PHYSICS
Abstract
A spectrometer includes a spectrogram, a digital camera and signal processing to compensate for limits of system spatial resolution, spatial distortions, lack of precise spatial registration, and limited dynamic range. The spectrogram is captured by a digital camera, and the corresponding image is converted to wavelength and magnitude with mitigation of the optical point spread function and of potential magnitude clipping due to over-exposure. The clipped portions of the signal are reconstructed using tangential adjacent point spread functions or adjacent channel ratios as a reference. Multichannel camera detectors having unique response magnitude ratios per wavelength are exploited to make associated direct mappings, thereby improving wavelength resolution and accuracy by up to one to two orders of magnitude.
Claims
1. A method, comprising: converting a multichannel spectrogram image pixel to a wavelength measurement by: converting the spectrogram image pixel to color plane coordinates, projecting the color plane coordinates from an achromatic point in the color plane to coordinates of purest form in the color plane, and selecting a resultant wavelength that has color plane coordinates closest to an intersection of the projected color plane coordinates and the coordinates of purest form in the color plane; creating a multichannel spectrogram image from at least one converted pixel; and converting the multichannel spectrogram image to magnitude by: converting each spectrogram pixel of the multichannel spectrogram image to respective wavelength and magnitude using multiple detector channels, determining a theoretical spatial mapping of each wavelength, for each theoretical wavelength, collect measured wavelengths in a spatial vicinity, and determine a maximum respective magnitude among the collection of measured wavelengths.
2. A method of converting a multichannel spectrogram image pixel to a wavelength measurement comprising: converting the spectrogram image pixel to color plane coordinates, projecting the color plane coordinates from an achromatic point in the color plane to coordinates of purest form in the color plane, selecting a resultant wavelength that has color plane coordinates closest to an intersection of the projected color plane coordinates and the coordinates of purest form in the color plane; and converting a multichannel spectrogram image to magnitude by: converting each spectrogram pixel of the multichannel spectrogram image to respective wavelength and magnitude using multiple detector channels, determining a theoretical spatial mapping of each wavelength, for each theoretical wavelength, collect measured wavelengths in the spatial vicinity, and determine the maximum respective magnitude among the collection of measured wavelengths; wherein determining a theoretical spatial mapping includes: selecting a first and a second reference wavelengths as most likely to be accurately measured, predicting theoretical spatial mappings of all wavelengths, and multiplexing between theoretical and measured wavelength according to a purity magnitude.
3. The method as recited in claim 2 wherein selecting two reference wavelengths comprises: identifying pixels with two channels having approximately equal magnitudes; and when a third or more channel exists, selecting the highest two magnitudes as the two reference wavelengths.
4. The method as recited in claim 2, wherein the multiplexing is performed by degrees, and the wavelength selected by multiplexing is a weighted sum of the theoretical and measured wavelengths.
Description
BRIEF DESCRIPTION OF THE FIGURES
DETAILED DESCRIPTION
(23) As described below, embodiments of the invention make significant improvements in measuring wavelength and magnitude from spectrogram images captured using relatively inexpensive tristimulus detectors. Such detectors are widely available as stand-alone RGB cameras, embedded in mobile devices such as smart phones (iPhone, Android, Blackberry), cell phones, notepads (iPad, etc.), laptops and other portable computers, and as accessories to computers such as USB cameras (webcams, in microscopes, telescopes, etc.). Many of these detectors include processors on which particular operations, described in detail below, may be performed.
(24) In addition, embodiments of the invention improve the effective spectral resolution beyond the system optical limits, both those of the apparatus capturing the image of the spectrogram and those of the spectrogram being captured. The amplitude measurement improvement includes both noise mitigation and non-linear distortion mitigation. Noise mitigation is achieved by both temporal and spatial integration at the appropriate wavelength. Non-linearity mitigation includes reconstructing peaks that have been clipped due to over-exposure. Together, these improvements in magnitude dynamic range can exceed an order of magnitude. For sufficiently separated line spectra, the resulting improvement in spectral resolution and accuracy can be orders of magnitude.
(25) Referring now to
(26) Processors on the image capture device or on a spectrometer may perform the processing described below by running software. In some embodiments, functions or operations may be programmed into an FPGA or other firmware or hardware.
(27) The RGB spectrogram image is optionally cropped in operation 44 to remove portions of the image that surround the spectrogram, thereby reducing the number of pixels to process for speed and/or reduced computation. For a nominally dark surround, cropping is performed by eliminating each line at the top and bottom, and each column on the left and right, in which all pixels are below a useful amplitude threshold corresponding to the noise floor or black. In a preferred embodiment, the cropped spectrogram retains a small black border, sufficient to measure the noise floor, on both sides, top and bottom. Alternatively, cropping may be performed by removing the portions of the image that do not correlate well with the ordered, relatively saturated colors expected in a spectrogram. The cropped result is an image of mostly pure colors or black, with colors changing along the primary axis and colors remaining relatively constant, but with varying intensity, along the secondary axis. In an alternative embodiment, the spectrogram image is rotated before or after cropping so that the primary axis is parallel to image rows or lines and the secondary axis is parallel to image columns.
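The dark-surround cropping described above can be sketched as follows; the function name, threshold value and border width are illustrative assumptions, not part of the invention:

```python
import numpy as np

def crop_spectrogram(img, noise_floor=8, border=2):
    """Crop rows/columns whose pixels all fall below the noise floor,
    keeping a small dark border for later noise-floor estimation.
    `img` is an HxWx3 array; threshold and border width are assumptions."""
    mask = img.max(axis=2) > noise_floor          # any channel above threshold
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0 or cols.size == 0:
        return img                                 # nothing above the floor
    r0 = max(rows[0] - border, 0)
    r1 = min(rows[-1] + border + 1, img.shape[0])
    c0 = max(cols[0] - border, 0)
    c1 = min(cols[-1] + border + 1, img.shape[1])
    return img[r0:r1, c0:c1]
```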
(28) Next, two types of nonlinearity of the spectrogram are compensated in an operation 46 as shown in more detail in
(29) As with most digitally encoded images, a gamma power function is used. So in order to apply linear operations such as integration and scaling to each channel, the inverse of the gamma power function must first be applied. For example, for sRGB, the linear representations Rlinear, Glinear and Blinear are calculated according to well-known techniques as follows:
(30) If R<=0.03928
(31) Rlinear=R/12.92
(32) else
(33) Rlinear=((R+0.055)/1.055)^2.4
(34) If G<=0.03928
(35) Glinear=G/12.92
(36) else
(37) Glinear=((G+0.055)/1.055)^2.4
(38) If B<=0.03928
(39) Blinear=B/12.92
(40) else
(41) Blinear=((B+0.055)/1.055)^2.4
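A minimal Python helper for the per-channel sRGB linearization (the 0.03928 breakpoint follows the text; the current IEC 61966-2-1 standard uses 0.04045):

```python
def srgb_to_linear(c):
    """Inverse sRGB gamma for a channel value c in [0, 1].
    Below the breakpoint the transfer curve is linear (divide by 12.92);
    above it, the offset power function applies."""
    if c <= 0.03928:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4
```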
(42) Next, any clipping is mitigated as shown in
(43) If any portion of a channel is clipped, clipping is located for each x location. In other words, for each location along the principal (x) axis, the locations along the secondary (y) axis of the start, clipStart(x) and end, clipEnd(x), of clipping are saved. For example,
(44) Referring again to
(45) The two principal methods of clip mitigation are: A) an adjacent unclipped column or mean of consecutive unclipped columns adjacent to and within the same channel of the clipped portion 78, or B) the mean ratios of the unclipped portion of the set of Rlinear, Glinear and Blinear channels of the top and bottom portion of the clipped column 84. For the second embodiment, as an example, for each column of pixels in 82, the mean triplet {Rlinear, Glinear, Blinear} is calculated for the same column (same x value) in the combination of above 94 and below 84 the clipped portion 82. In other words the mean of nearby unclipped image segments is calculated for each channel, Rmuc, Gmuc, Bmuc. Then, for portions of the image where only Rlinear is clipped within 82, and at least one other channel is not clipped, the larger unclipped channel is the reference channel and the corresponding column is used as the local reference column within 82. Then the portion of the clipped Rlinear signal within 82 is replaced with the scaled portion of the local reference column scaled by the ratio of the mean reference channel (Gmuc or Bmuc) with Rmuc. For example, for a given column x with clipped Rlinear(x) within 82, if Glinear(x) is the only unclipped channel or if it is larger than Blinear(x), then the clipped portion of Rlinear(x,y) is replaced with Glinear(x,y)*Rmuc(x)/Gmuc(x). Let the scale factor
s=Rmuc(x)/Gmuc(x)
and the reference column for a given x be given by
refColumn(y)=Glinear(y)
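Method B above can be sketched as follows for a single clipped red column; the array names, the 1-D column layout, and the precomputed Rmuc/Gmuc means are assumptions for illustration:

```python
import numpy as np

def patch_clipped_column(r_col, g_col, clip_start, clip_end, r_muc, g_muc):
    """Replace the clipped span of a red column with the unclipped green
    reference column scaled by the ratio of mean unclipped red to mean
    unclipped green, i.e. s = Rmuc/Gmuc as in the text."""
    s = r_muc / g_muc                      # scale factor s = Rmuc(x)/Gmuc(x)
    patched = r_col.copy()
    patched[clip_start:clip_end] = s * g_col[clip_start:clip_end]
    return patched
```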
(47) For the first embodiment, the same strategy of replacing a clipped signal with a ratio scaled nearby reference signal is applied. However, instead of referencing a different unclipped channel, the column segments adjacent (that is adjacent along the secondary y axis, to the top 94 and bottom 84 of 82 in
(48) An example method of matching skirts is as follows. As shown in the block diagram of
(49) A) the optical point spread function guarantees non-clipped samples of the digital image at the boundaries of the clipped portion of the image (i.e. 94 and 84 are above 0 and not clipped) and
(50) B) the intensity profile across the spectrum naturally includes some unclipped portion (i.e. 78 is above 0 and not clipped).
(51) Using the same example of
(52) The respective centroids are used for registration between respective reference unclipped and clipped columns. The result is shown in
s=(refColSegs^T*refColSegs)^−1*refColSegs^T*RcolSegs
where refColSegs and RcolSegs are both N×1 column vectors.
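For N×1 vectors, the least-squares scale above reduces to a ratio of dot products; a minimal sketch with assumed names:

```python
import numpy as np

def ls_scale(ref_col_segs, r_col_segs):
    """Least-squares scale s minimizing ||s*refColSegs - RcolSegs||^2,
    i.e. s = (ref^T ref)^-1 ref^T R for N x 1 column vectors."""
    ref = np.asarray(ref_col_segs, dtype=float)
    r = np.asarray(r_col_segs, dtype=float)
    return float(ref @ r) / float(ref @ ref)
```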
(53) Shown in
(54) Now applying this scale, s, also to the original portion of the reference 78, that is the portion corresponding to the clipped portion of Rlinear 82, we obtain a patch for and an estimate of the portion in Rlinear that was clipped. The result 160 is shown in
(56) Next a single value mean for each column is calculated for each channel. The mean value increases resolution of the relative magnitudes and reduces noise. Referring again to
(57) Next, in operation 50 shown in
(58) The tristimulus set {Rm(x),Gm(x),Bm(x)} is then converted to magnitude, saturation and wavelength. The tristimulus set is converted to wavelength by A) converting to coordinates in a color plane, B) projecting the color plane coordinates to coordinates of purest form in the color plane, and C) selecting the wavelength whose coordinates come closest to those of the projection in step B. One embodiment uses the commonly used pseudo-physiological CIE 1931 xy color plane.
(59) So as not to confuse the spectrogram primary axis x, with the x of the color plane coordinate system, the remaining text of the invention details will substitute the spectrogram primary axis index variable x with n, as in {Rm(n), Gm(n), Bm(n)}. Thus the corresponding CIE1931 {x,y} values are {x(n),y(n)}.
(60) The conversion of {Rm,Gm,Bm} to magnitude, saturation and wavelength is performed as follows. First, each of the {Rm,Gm,Bm} values along the centroid curve is converted to CIE 1931 {x,y} values using the respective colorimetry conversion operation 52 of
(61) As depicted by the marked up plots of
(62) The slope of the projected line 220 of
slope(n)=(y(n)−yw)/(x(n)−xw)
where {xw,yw} are the CIE 1931 coordinates for the reference white point for the camera colorimetry. In the case of sRGB, {xw,yw}={0.3127, 0.3290}.
(63) The corresponding angle with the horizontal (x) axis of
angle(n)=atan(slope(n))+angleOffset;
(64) Then angle(n) is matched to the angle in a table. The table has a column each for angles, x coordinates, y coordinates and wavelength. The angles are calculated from the arc tangent, atan, of the slope of the line between {xw, yw} and the respective {x,y} coordinates of pure monochromatic light of the given wavelength of each row of the table. The CIE 1931 {x,y} coordinate and wavelength data for the pure monochromatic light curve is given by Table 1 of section 3.3.1 of Gunter Wyszecki, W. S. Stiles, “Color Science: Concepts and Methods, Quantitative Data and Formulas, 2nd Edition,” 1982, John Wiley & Sons, NY, that is hereby incorporated by reference herein. The table with precalculated angles from the CIE 1931 {x,y} coordinates is used for expediency for converting angle to wavelength.
(65) Thus, each {Rm,Gm,Bm} is converted to CIE 1931 {x,y} and projected to pure light 226. The corresponding wavelength, lambda, given by the aforementioned reference table is used. In an alternative embodiment, linear interpolation between corresponding angles in the table may be used to determine wavelength at finer resolution.
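Steps A) through C) can be sketched as an angle match against a small table; the white point is the sRGB D65 value given above, while the locus entries below are a short, approximate excerpt standing in for the full CIE 1931 table cited in the text:

```python
import math

WHITE = (0.3127, 0.3290)   # sRGB D65 reference white point {xw, yw}

# Approximate CIE 1931 spectral-locus coordinates (wavelength nm, x, y);
# a real implementation would use the full table referenced in the text.
LOCUS = [
    (470, 0.1241, 0.0578),
    (520, 0.0743, 0.8338),
    (550, 0.3016, 0.6923),
    (570, 0.4441, 0.5547),
    (590, 0.5752, 0.4242),
    (610, 0.6658, 0.3340),
    (650, 0.7260, 0.2740),
]

def locus_angle(x, y):
    """Angle of the ray from the white point through (x, y);
    atan2 keeps the full 360-degree range, unlike plain atan."""
    return math.atan2(y - WHITE[1], x - WHITE[0])

def xy_to_wavelength(x, y):
    """Project (x, y) from the white point toward the spectral locus and
    return the table wavelength whose ray angle is nearest."""
    a = locus_angle(x, y)
    return min(LOCUS, key=lambda row: abs(locus_angle(row[1], row[2]) - a))[0]
```

A finer result could be obtained by linearly interpolating between adjacent table angles, as the alternative embodiment describes.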
(66) Again referring to
(67) In one embodiment, the purity estimation values are used to establish a nominal mapping between the spectrogram primary axis n and the remaining wavelengths. In typical spectrogram designs, there may be a non-linear relationship between spatial offset and wavelength. Cosine correction is often included to compensate. For the case where a direct image capture of the spectrogram is taken, such compensation may need to be performed through image processing. The lambda values with the highest respective purity estimation values are used to establish reference (control) points for registering the corresponding portions of the cosine-uncorrected spectrogram, and then the remaining wavelengths follow cosine correction established using known methods. Typically the best of the purest wavelengths to use for this purpose are first near yellow, where the red and green channels are equal, and second near cyan, where green and blue are equal. These two wavelength points in the spectrum tend to be especially useful because A) the points where red and green are equal tend to be at mid-range sensitivities for two channels, where they are less likely to suffer from low signal-to-noise ratio, or from clipping or other high-level distortion, and B) the human vision system is particularly sensitive to wavelength differences near yellow and cyan, where the corresponding cones have high derivatives of sensitivity with respect to wavelength, and thus for commercial success, cameras must be particularly accurate in these regions.
Yellow is better than cyan because in a typical colorimetry (including the example sRGB) yellow is fairly well saturated for the case where channels are R=G, B=0 (the yellow point on the red to green primary line within the CIE 1931 xy plane), whereas the corresponding cyan line, G=B, R=0 (cyan point on the green to blue line within the CIE 1931 xy plane) is not as saturated and the human eye is slightly less sensitive to the change in wavelength. The human eye is much less sensitive to changes in wavelengths at extremes of the visual spectra as well as in the middle of green.
(68) Accordingly, an example of measured wavelength, measWlen(n) as 250, and cosine corrected theoretical mapping of measured wavelength, theoryWlen(n) as 252, using yellow and cyan measured points is shown in plots vs n in
(69) The RGB values are also converted to a magnitude:
magnitude(n)=|{R(n),G(n),B(n)}|=sqrt(R(n)^2+G(n)^2+B(n)^2)
(70) Next, since wavelength typically is not a linear function of n, and a spectrometer produces magnitude vs wavelength, the next step is to determine magnitude as a function of wavelength. Note that limits in optical resolution, optical blur, cause a single essentially pure wavelength of light to be spread, and thus measured, across a span of the spectrogram primary axis n. For example, for the case where a single spectral line is alone in the spectrogram, the optical point spread function of the system will spread this wavelength of light spatially. Most applications are especially interested in wavelength and magnitude, typically with particular value given to magnitude peaks, and no particular value given to information to be gleaned from the optical point spread function. Accordingly, for each wavelength, many magnitudes may be measured across n. Among these many magnitudes for a given wavelength, magnitudes measured far from the expected location (after registration above) of the spectrogram are generally ignored since they are likely stray light or in some other way erroneous. Of the remaining magnitudes measured for the given wavelength, the maximum is taken for that wavelength. Thus the maximum magnitude within the vicinity of the theoretical location (once mapped accordingly to the above method) is used as the measured magnitude for a given wavelength(n).
(71) If (measWlen(n1)==measWlen(n2)) and (|n1−n2|<ndiffMax)
(72) then mag(measWlen(n1))=max(magnitude(n1),magnitude(n2))
(73) where measWlen is the measured wavelength and
(74) ndiffMax corresponds to the expected optical point spread function window width in sample units n.
(75) For spectral lines this is a preferred embodiment. For broadband spectra, the measured magnitude vs. theoretical wavelength is a preferred embodiment. In the preferred embodiments, the purity estimate values are used to cross-fade between these two methods of determining magnitude for a given wavelength.
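The max-within-window rule of the pseudocode above can be sketched as follows; the function and variable names are assumptions for illustration:

```python
def magnitude_vs_wavelength(meas_wlen, magnitude, ndiff_max):
    """For samples n mapping to the same measured wavelength and lying
    within ndiffMax of each other along the primary axis, keep only the
    maximum magnitude. Returns {wavelength: magnitude}."""
    out = {}
    for n, (w, m) in enumerate(zip(meas_wlen, magnitude)):
        if w in out:
            n_prev, m_prev = out[w]
            if abs(n - n_prev) < ndiff_max:     # within point-spread window
                out[w] = (n, max(m, m_prev))
        else:
            out[w] = (n, m)
    return {w: m for w, (_, m) in out.items()}
```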
(78) Embodiments of the invention may be used to make devices such as: A) smart phone visible spectrometer B) smart phone Raman spectrometer and C) an infra-red (IR) spectrometer from a commercially available electronic camera with altered optical filters. In one embodiment, a Raman spectrometer includes a smart phone, a means of attaching and aligning the smart phone to the spectrogram housing such as a bracket or holder, a Raman excitation laser and a photo-detector trigger of the laser. The laser is turned on when the photo-detector senses the smart phone flash. In these two examples, the imaging device and spectrometer may be either mounted or not mounted.
(79) These embodiments may include spectrometer stimulus control via a camera flash as shown in
(81) Although the flash 312, flash detector 334, and light source 332 are illustrated in
(82) In some embodiments the energy source 332 is a laser and the spectrometer 330 is a Raman spectrometer. In some embodiments the energy source 332 is a broad-band infra-red source and the spectrometer 330 is an infra-red (IR) spectrometer. In some embodiments the energy source 332 is a broad-band ultra-violet source and the spectrometer 330 is an ultra-violet (UV) spectrometer. In some embodiments the energy source 332 is a broad-band IR-VIS-UV source and the spectrometer 330 is an IR-VIS-UV spectrometer. In some embodiments the energy source 332 is a broad-band terahertz source and the spectrometer 330 is a terahertz spectrometer. In some embodiments the energy source 332 is an electric arc or corona discharge source and the spectrometer 330 is an electric arc or corona discharge spectrometer, respectively.
(83) Although specific embodiments of the invention have been illustrated and described for purposes of illustration, it will be understood that various modifications may be made without departing from the spirit and scope of the invention.