Full resolution color imaging of an object
11741729 · 2023-08-29
Assignee
Inventors
- Jens H. Jorgensen (Rochester, NY)
- Michael W. LaCourt (Spencerport, NY)
- Donald J. Moran, Jr. (Rochester, NY)
- David J. Hawks (Rochester, NY)
- Joseph Wycallis (Pittsford, NY)
- Stephen C. Arnold (Honeoye Falls, NY)
CPC classification
G01N21/255
PHYSICS
H04N25/133
ELECTRICITY
H04N23/74
ELECTRICITY
International classification
G06V20/69
PHYSICS
G01N21/25
PHYSICS
H04N23/74
ELECTRICITY
Abstract
The invention relates generally to both a method and apparatus for the creation of full resolution color digital images of diagnostic cassettes or other objects of interest using a gray-scale digital camera or sensor, combined with time-sequential illumination using the additive primary colors, followed by post-exposure digital processing. Such procedures and equipment are of significant economic value when employed in situations such as diagnostic clinical analyzers, where space is limited and image quality requirements are high.
Claims
1. A method of constructing a monochrome digital image of an object of interest comprising: optically obtaining a red, green, and blue color plane of image data using a grayscale digital camera or sensor; selecting one color plane for subsequent manipulation; creating a pixel matrix of the selected color plane, each element of the pixel matrix being indicative of a reflection intensity of a corresponding pixel of the selected color plane; multiplying, element by element, the pixel matrix by a pre-defined color weighting matrix producing a weighted selected color plane; and multiplying said weighted selected color plane by a pre-defined selected gain scalar producing a final selected color plane which is said constructed monochrome digital image.
2. The method of claim 1, wherein the object of interest is a diagnostic cassette.
3. The method of claim 1, wherein the selected color plane is the red color plane.
4. The method of claim 1, wherein the selected color plane is the green color plane.
5. The method of claim 1, wherein the selected color plane is the blue color plane.
6. The method of claim 1, wherein the constructed monochrome digital image is of full resolution.
7. The method of claim 1, wherein the constructed monochrome digital image is used as a digital image to determine the presence or absence of an agglutination reaction.
8. The method of claim 1, further comprising illuminating the object of interest with light of a selected wavelength.
9. The method of claim 8, wherein the light of the selected wavelength is one of red wavelength light, green wavelength light, or blue wavelength light.
10. The method of claim 8, wherein the constructed monochrome digital image is used as a digital image to determine the presence or absence of an agglutination reaction.
11. A method of constructing a monochrome digital image of an object of interest comprising: illuminating the object of interest with light of a selected wavelength; optically obtaining a color plane of image data using a grayscale digital camera or sensor; creating a pixel matrix of the color plane, each element of the pixel matrix being indicative of a reflection intensity of a corresponding pixel of the color plane; multiplying, element by element, the pixel matrix by a pre-defined color weighting matrix producing a weighted selected color plane; and multiplying the weighted selected color plane by a pre-defined selected gain scalar producing a final selected color plane which is the constructed monochrome digital image.
12. The method of claim 11, wherein the object of interest is illuminated by at least one of a source of front illumination or a source of rear illumination.
13. The method of claim 12, wherein the source of front illumination and/or the source of rear illumination is configured to emit red, green, and blue wavelength light.
14. The method of claim 11, wherein the object of interest is a diagnostic cassette.
15. The method of claim 11, wherein the constructed monochrome digital image is of full resolution.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
(10) While the present invention is described with respect to the preferred embodiments described below and shown in the figures, the present invention is limited only by the metes and bounds of the claims that follow.
(11) The apparatus and methods described herein enable the creation of a color digital image of preferably full resolution using a gray-scale digital camera or sensor 100.
(12) The benefits of the apparatus include a reduced unit manufacturing cost when compared to the equivalent process using a full color digital camera or sensor, and no reduction in image quality due to the use of a color filter array to produce a color image. Another important benefit is that the use of a gray-scale digital camera or sensor 100 allows for a shorter and more compact optical path suitable for confined spaces.
(13) For a general understanding of the disclosed technology, reference is made to the drawings. In the drawings, like reference numerals have been used to designate identical elements. In describing the disclosed technology, the following term(s) have been used in the description.
(14) The term “image” or “picture” refers to a two-dimensional light intensity function optionally having x-axis and y-axis position coordinates with a numerical value that is proportional to the brightness or gray level of the image at the specified coordinate point.
(15) The term “analog” or “analogue” refers to a picture or image that is continuous both spatially in two dimensions and in brightness or intensity.
(16) The term “digital” refers to a picture or image that is digitized to a predefined number of levels both spatially in two dimensions and in brightness or intensity. A digital image may be represented by a vector or matrix whose row and column indices identify a point in the image and whose corresponding numerical value identifies the brightness or intensity at that point.
(17) The term “intensity” refers to the amount of light reaching a digital camera or sensor such that the higher the relative output value the greater the number of photons reaching the digital camera or sensor. Intensity is commonly associated with digital pictures or images.
(18) The term “density” refers to the amount of light reaching a digital camera or sensor such that the higher the relative output value the fewer the number of photons reaching the digital camera or sensor. Density is commonly associated with photographic pictures or images.
(19) The term “reflection intensity” refers to the amount of light received by a digital camera or sensor where the light path originates at a source and reverberates or bounces or is reflected off an object of interest, subsequently arriving at the digital camera or sensor.
(20) The term “transmission intensity” refers to the amount of light received by a digital camera or sensor where the light path originates at a source and proceeds through an object of interest, subsequently arriving at the digital camera or sensor.
(21) The term “contrast” refers to the difference in the color and brightness of the object and other objects within the same field of view. The maximum contrast of an image is the contrast ratio or dynamic range.
(22) The term “dynamic range” or equivalently “contrast ratio” refers to the ratio of the luminance of the brightest color (white) to that of the darkest color (black) that the system is capable of producing.
(23) The term “luminance” refers to a photometric measure of the intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through or is emitted from a particular area, and falls within a given solid angle.
(24) The term “red” refers to the color perceived by humans corresponding to the longer wavelengths of visible light, generally in the wavelength range of substantially 630-740 nanometers.
(25) The term “green” refers to the color perceived by humans corresponding to visible light having a spectrum dominated by energy generally with a wavelength of substantially 520-570 nanometers.
(26) The term “blue” refers to the color perceived by humans corresponding to the shorter wavelengths of visible light, generally in the wavelength range of substantially 440-490 nanometers.
(27) The term “monochrome” refers to images in one color or shades or tones of one color. A black-and-white image, composed of shades of gray, is an example of a monochrome image.
(28) The term “gray-scale” refers to a digital image in which the value of each pixel is a single sample, that is, it carries only intensity information. Images of this sort, for example, black-and-white, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest intensity.
(29) The term “plane” or “color plane” refers to a set of intensities or densities associated with a single color which may be the result of a reflection process, transmission process, or a combination of both.
(30) The term “pixel” refers to a discrete spatial component of a digital picture or image and is short for picture element.
(31) The term “full resolution” refers to a monochrome or color digital image having the maximum information content associated with a specified number of pixels where, for example, for a rectangular digital image the total number of pixels is the number of rows of pixels multiplied by the number of columns of pixels.
(32) The term “reduced resolution” refers to a monochrome or color digital image where the information content has been reduced by the application of a mathematical algorithm to the raw pixel data, such as interpolation or low pass filtering, which has been used to generate output pixel values. While the number of pixels in the reduced resolution digital image may be spatially the same as a full resolution digital image, that is the spatial size of the two digital images are the same, the information content of the reduced resolution digital image is significantly lower.
(33) The term “spatial resolution” refers to the physical size or dimensions of a digital image. For a rectangular digital image, the spatial resolution is the number of pixels in a row by the number of pixels in a column. The common unit of resolution for digital cameras or sensors is megapixels.
(34) The term “suitably capture” or “suitably captured” refers to methods that enable obtaining high quality imagery of diagnostic cassettes or objects of interest.
(35) The term “depth of field” refers to the distance between the nearest and farthest objects in a scene that appear acceptably sharp in an image. Although a lens can precisely focus at only one distance at a time, the decrease in sharpness is gradual on each side of the focused distance, so that within the depth of field, the amount of unsharpness is imperceptible under normal viewing or imaging conditions. All objects within the depth of field of an imaging or optical system are considered to be in focus.
(36) The term “Hadamard multiplication” refers to a type of matrix multiplication where the matrices being multiplied have the same number of rows and columns, as does the resultant or output matrix, and the elements of the resultant or output matrix, for a specific row and column, are formed by the multiplication of the values of the elements having the same specific row and column. This is also known as element-by-element matrix multiplication.
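The element-by-element multiplication defined above can be sketched in a few lines of Python; this is an illustrative helper using lists of lists, not part of the patented apparatus:

```python
def hadamard(a, b):
    """Element-by-element (Hadamard) product of two matrices of identical
    dimensions, each represented as a list of lists."""
    if len(a) != len(b) or any(len(ra) != len(rb) for ra, rb in zip(a, b)):
        raise ValueError("matrices must have identical dimensions")
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# A 2x2 weight matrix applied to a uniform 2x2 measured plane:
weights = [[1.0, 0.5], [0.5, 1.0]]
measured = [[0.8, 0.8], [0.8, 0.8]]
print(hadamard(weights, measured))  # [[0.8, 0.4], [0.4, 0.8]]
```

Note that, unlike ordinary matrix multiplication, the result has the same dimensions as the inputs and each output element depends on exactly one element of each input.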
(37) The term “duty cycle” refers to the time that an entity spends in an active state as a fraction of the total time under consideration. For a light emitting diode (LED), the duty cycle would be the fraction of the time that the LED has actually been emitting light since a predetermined starting time. The amount of time that an LED has actually been emitting light is the numerical value of the duty cycle (a number between 0.0 and 1.0) times the time elapsed since the predetermined starting time.
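The duty-cycle arithmetic described above reduces to a single multiplication; as a sketch (the function name is illustrative):

```python
def led_on_hours(duty_cycle: float, elapsed_hours: float) -> float:
    """Cumulative hours an LED has actually been emitting light:
    the duty cycle (a number between 0.0 and 1.0) times the time
    elapsed since the predetermined starting time."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must lie in [0.0, 1.0]")
    return duty_cycle * elapsed_hours

print(led_on_hours(0.25, 1000.0))  # 250.0
```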
(38) The term “diagnostic cassette” refers to test elements that are commonly defined by a flat planar substrate having a plurality of transparent microtubes or microcolumns that define test chambers. A predetermined quantity of inert bead or gel material is added to each of the microtubes. This inert material may be coated with an antibody or an antigen or provided with a carrier-bound antibody or antigen or with specific reagents. Typically, a foil wrap is used to cover the top of the card or cassette, thereby sealing the contents of each microcolumn until the time of test. The foil wrap is pierceable or otherwise removed to enable aliquots of patient sample and/or reagents to be added to each of the microtubes, either manually or in an automated apparatus.
(39) The term “diagnostic analyzer” or “diagnostic clinical analyzer” refers to a semi-automated or automated apparatus composed of various subsystems including a patient sample handler, a test vessel (such as a diagnostic cassette) handler, an incubator, a centrifuge, a pipette for obtaining and dispensing reagents and patient samples, and a means to measure and quantify the result of specific tests on the patient sample among others.
(40) One aspect of the invention is directed to a method and apparatus for producing color digital images at full resolution using a gray-scale digital camera or sensor 100. A primary benefit of the technology is reduced unit manufacturing cost in comparison to the use of a color digital camera or sensor, producing equivalent digital resolution, when employed as a component of a diagnostic clinical analyzer. Another benefit is that the resulting color image quality produced by this method using the gray-scale digital camera or sensor 100 is of higher quality than an image produced by a color digital camera or sensor, of equivalent spatial resolution, employing a color filter array because of the loss of effective resolution (resulting in reduced resolution) due to the image sensor data manipulation required in the latter case. Color filter arrays are described by Bryce E. Bayer in U.S. Pat. No. 3,971,065 entitled “Color imaging array” which is hereby incorporated by reference in its entirety. Yet another benefit is that the smaller physical size of the gray-scale digital camera or sensor 100, as compared to the size of a color digital camera or sensor of equivalent resolution, results in a shorter optical path more suitable for confined spaces. The process of producing a diagnostic result generally requires the transport of the diagnostic cassette to accept the patient sample and diagnostic reagents plus potential movement to a region of higher temperature (incubation) or to an apparatus to apply centrifugal force. At the end of the incubation and centrifugation processes, the diagnostic cassette or object of interest 102 can be held in one or more fixed positions. At that point, sequential illumination can be employed to obtain a full resolution color digital image of the front and of the back of the diagnostic cassette or object of interest 102 and its contents.
(42) Further details of a preferred mechanical holder 104 can be found in the commonly assigned co-pending application of Robert Jones and Lynn Willett entitled “An Apparatus for Gripping and Holding Diagnostic Cassettes” (Ser. No. 61/545,651), filed Oct. 11, 2011, which is hereby incorporated by reference in its entirety.
(51) The above apparatus is appropriately configured to allow for a calibration and image processing method capable of coping with a number of environmental and equipment-related issues. These issues are most conveniently organized into four tiers as follows:
Tier 1: Electrical component variability; variations in gray-scale digital camera or sensor sensitivity, including illumination variability.
Tier 2: Variations in ambient lighting; variations in input power; obtaining required front to back image contrast.
Tier 3: Electrical component degradation over time; electrical component temperature sensitivity; output image color requirements.
Tier 4: Component changes due to replacement or due to a change in component manufacturer.
Imaging Subsystem Calibration
(52) A tier 1 calibration of the imaging subsystem takes place shortly after manufacture in the factory. Electrical component variability is addressed via the use of mechanically modifiable resistors (for example, a potentiometer, which is a three-terminal resistor with a sliding contact that forms an adjustable voltage divider) and instrumentation such that a sequence of predefined test signals results in a sequence of predefined test results. Variations in gray-scale digital camera or sensor 100 sensitivity, including illumination variability, are addressed via the determination of a matrix of multiplicative weights. This matrix of multiplicative weights has the same row and column dimensions as the output matrix of pixels from the digital camera or sensor 100, such that a specific weight is applied to a specific pixel via Hadamard multiplication.
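One common way to derive such a matrix of multiplicative weights is flat-field correction: image a uniform target and take the per-pixel ratio of the intended intensity to the measured intensity. The sketch below illustrates that technique under stated assumptions; it is not necessarily the factory procedure:

```python
def weight_matrix(target: float, measured: list) -> list:
    """Per-pixel multiplicative weights from a flat-field exposure:
    weight = intended uniform intensity / measured intensity, so that
    the Hadamard product (weight .* measured) reproduces the target."""
    return [[target / m if m > 0 else 0.0 for m in row] for row in measured]

# A left-to-right exposure gradient, loosely in the spirit of the
# patent's gradient example (values here are assumed):
measured = [[0.8, 0.7, 0.6], [0.8, 0.7, 0.6]]
weights = weight_matrix(0.8, measured)
corrected = [[w * m for w, m in zip(wr, mr)]
             for wr, mr in zip(weights, measured)]
# Each corrected element recovers the uniform 0.8 exposure.
```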
(53) A tier 2 calibration takes place when the diagnostic analyzer is moved to its production location. Variations in input power are managed via the use of a Texas Instruments TLC5923 integrated circuit (for details see Kouzo Ichimaru, “Temperature compensated constant-voltage circuit and temperature compensated constant-current circuit,” U.S. Pat. No. 5,430,395, which is hereby incorporated by reference in its entirety), which implements a temperature compensated constant current circuit that also incorporates error information circuitry allowing the detection of broken or disconnected LED devices. The required front to back image contrast is obtained by adjusting the relative amount of power sent to the front and back illumination light emitting diode circuit boards, 602 and 603, respectively. Based upon the green image, the ratio of the average intensity of the front illuminated image to the average intensity of the back illuminated image is set to a predetermined value. This provides the required front to back contrast. Variations in gray-scale digital camera or sensor 100 sensitivity, including illumination variability, are addressed by repeating the tier 1 calibration to obtain a new matrix of multiplicative weights. This compensates for variations in gray-scale digital camera or sensor 100 sensitivity, including illumination variability, and also compensates for variations in ambient lighting plus any variations in illumination intensity caused by the first two steps of the tier 2 calibration procedure.
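The front-to-back contrast adjustment described above reduces to a ratio computation. A hypothetical sketch, assuming (illustratively) that image intensity scales linearly with illumination power:

```python
def mean_intensity(plane):
    """Average intensity of a color plane given as a list of lists."""
    values = [v for row in plane for v in row]
    return sum(values) / len(values)

def back_power_scale(front_green, back_green, target_ratio):
    """Multiplicative adjustment to back-illumination power so that
    mean(front) / mean(back) reaches the predetermined target ratio.
    Linearity of intensity with power is an assumption of this sketch."""
    current_ratio = mean_intensity(front_green) / mean_intensity(back_green)
    return current_ratio / target_ratio

# e.g. front averages 0.6, back 0.4 (ratio 1.5); a target ratio of 2.0
# requires scaling back power by 0.75.
print(back_power_scale([[0.6]], [[0.4]], 2.0))
```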
(54) A tier 3 calibration, or more precisely, compensation for electrical component degradation over time, is achieved by exploiting known hardware behavior. For example, it is well known that light emitting diodes produce less light output per unit input power as a function of the number of hours of use. In particular, this decrease in light output as a function of the cumulative time the diode has been emitting light, based on the duty cycle, can be modeled and used to obtain a first order image correction matrix to maintain the imaging subsystem in calibration over a longer time span. The image correction matrix elements would vary as a function of time depending upon the total number of hours that the LED has been emitting light. For details, see Anthony P. Zygmunt, Chein Hsun Wang, and J. Anthony Spies, “System and Method for LED Degradation and Temperature Compensation,” US 2010/0007588, which is hereby incorporated by reference in its entirety. Compensation for electrical component temperature sensitivity is achieved via the Texas Instruments TLC5923 integrated circuit as referenced above.
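As a sketch of such a first-order correction, assume (purely for illustration) that LED output decays linearly with cumulative on-time at a known rate k; the compensating weight is then the reciprocal of the remaining output fraction. Both the linear model and the value of k below are assumptions, not values from the patent or the cited Zygmunt et al. application:

```python
def degradation_weight(on_hours: float, k: float = 1e-5) -> float:
    """First-order compensation for LED output decay.

    Assumes output falls linearly with cumulative emitting time:
    fraction = 1 - k * on_hours. The correction weight restores the
    nominal level by dividing by that fraction."""
    fraction = max(1.0 - k * on_hours, 0.1)  # floor avoids runaway gains
    return 1.0 / fraction

def correction_matrix(rows: int, cols: int, on_hours: float) -> list:
    """Uniform image correction matrix for a single aged LED source."""
    w = degradation_weight(on_hours)
    return [[w] * cols for _ in range(rows)]

# A fresh LED needs no correction; an aged one is scaled up.
print(degradation_weight(0.0))  # 1.0
```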
(55) A tier 4 calibration consists of performing tier 1, tier 2, and tier 3 calibrations again in sequence.
Image Processing and Reconstruction Procedure
(56) Image processing and reconstruction generally occurs after all calibration procedures have been executed and is the normal operating mode of the imaging subsystem.
(57) The prior qualitative discussion of images can now be made quantitative. An 8-bit digital image has 2^8=256 intensity levels in the range of [0, 255]. It is also common to describe the range of intensity using a floating point number in the range of [0.0, 1.0], as employed in MatLab. This latter construct will be used in describing the following preferred method. Note that the exemplary image sizes used in the following examples are much smaller than real images, which may have many thousands of rows and columns.
(58) Consider the following example image in matrix form:
(59)
R R R R R R R R R R
R R R R R R R R R R
G G G G G G G G G G
G G G G G G G G G G
B B B B B B B B B B
B B B B B B B B B B
K K K K K K K K K K
K K K K K K K K K K
W W W W W W W W W W
W W W W W W W W W W
Where there are two rows of red pixels (R), two rows of green pixels (G), two rows of blue pixels (B), two rows of black pixels (K), and two rows of white pixels (W). Upon exposure to red, green, or blue wavelength light, the reflection intensity of the above image will be recorded as a 10×10 matrix of values in the closed interval [0.0, 1.0]. For example, a uniform exposure to green wavelength light of intensity 0.8 would ideally produce the following green image plane, in reflection intensities, for a gray-scale digital camera or sensor 100:
(60)
A_GREEN =
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8
0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8
0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8 0.8
However, achieving a uniform exposure is difficult and there could be, for example, a gradient in exposure from left to right which might produce the following green image plane:
(61)
Such non-uniformities in exposure can be handled via a green weight matrix whose numerical elements have values determined by the calibration procedure and which, once applied, produces a corrected green color plane. For exposure gradients as in A_GREEN PRIME above, the following green weight matrix (with numerical values to two decimal places) could be employed:
(62)
such that a corrected exposure matrix is obtained as follows:
A_GREEN CORRECTED = A_GREEN WEIGHT (.*) A_GREEN MEASURED
where the operation (.*) denotes the element-by-element multiplication of the matrix elements as defined in MatLab (see Getting Started with MatLab 7, September 2010, The MathWorks, Inc., pages 2-24 and 2-25, which is hereby incorporated by reference in its entirety) and is also known as the Hadamard product. Note that for this particular example,
A_GREEN CORRECTED = A_GREEN
to a couple of decimal places; that is to say, essentially perfect correction would be obtained.
(63) In a similar manner, corrections for alternative exposure geometries can be implemented. As an additional example, suppose the green wavelength exposure was a point source such that maximum intensity from the source was directed to the center of the object being imaged. The point source exposure (in intensity) would then have a decreasing gradient, in a radial direction, to pixels at increasing distances from the center of the object being imaged. A weight matrix in this case would have the lowest numerical values at the center with increasing numerical values as a function of the distance from the center to compensate for the radial falloff in exposure intensity.
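The radially increasing weight matrix just described can be sketched in Python. The quadratic falloff model and the falloff coefficient below are illustrative assumptions, not values from the calibration procedure:

```python
def radial_weight_matrix(rows, cols, falloff=0.02):
    """Weight matrix compensating point-source exposure falloff:
    smallest at the image center, growing with distance from the center.

    Assumes (illustratively) illumination falls off as
    1 / (1 + falloff * d^2) at squared distance d^2 from the center,
    so the compensating weight is the reciprocal, 1 + falloff * d^2."""
    center_r, center_c = (rows - 1) / 2.0, (cols - 1) / 2.0
    return [[1.0 + falloff * ((r - center_r) ** 2 + (c - center_c) ** 2)
             for c in range(cols)] for r in range(rows)]

w = radial_weight_matrix(5, 5)
# The center pixel carries the lowest weight; the corners the highest.
print(w[2][2], w[0][0])
```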
(64) In addition to exposure geometry corrections, other digital image corrections can be applied, such as compensation for the long-term degradation of the output intensity of LEDs as a function of age and/or compensation for the spectral response or sensitivity of the gray-scale digital camera or sensor 100 being employed. In any event, the resulting weight matrices can be combined into a single weight matrix to account for all corrections being employed as follows:
A_WEIGHT = A_CORRECTION #1 (.*) A_CORRECTION #2 (.*) A_CORRECTION #3
where the (.*) operation denotes the Hadamard (element-by-element) multiplication of the correction matrices. Further corrections are also possible. For example, both the peak relative intensity and the area under the relative intensity response or sensitivity curve could vary significantly as a function of light wavelength. Hence, for a specific intensity exposure, the reflection intensity of red might be only a fraction of the reflection intensity of green or blue. Likewise, for a gray-scale digital camera or sensor 100, the spectral response or sensitivity to red light might be only a fraction of the response to green or blue light. This variability is most easily compensated for by having a scalar gain correction for each wavelength exposure. Specifically, green exposure reflection intensity, combined with the green spectral response or sensitivity of the gray-scale digital camera or sensor 100, can be corrected by applying a gain scalar to the previously determined exposure correction equation for geometry as follows:
A_GREEN TOTALLY CORRECTED = A_GREEN GAIN (.*) A_GREEN WEIGHT (.*) A_GREEN MEASURED
where the (.*) operation denotes the multiplication of the A_GREEN GAIN scalar by each element of the resulting Hadamard matrix multiplication on the right. In aggregate, the correction processing is carried out plane-wise as follows:
A_RED TOTALLY CORRECTED = A_RED GAIN (.*) A_RED WEIGHT (.*) A_RED MEASURED
A_GREEN TOTALLY CORRECTED = A_GREEN GAIN (.*) A_GREEN WEIGHT (.*) A_GREEN MEASURED
A_BLUE TOTALLY CORRECTED = A_BLUE GAIN (.*) A_BLUE WEIGHT (.*) A_BLUE MEASURED
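The plane-wise correction above can be sketched in Python; the helper name and the example gain and weight values are illustrative assumptions:

```python
def correct_plane(gain, weight, measured):
    """Apply TOTALLY CORRECTED = gain * (weight .* measured) to one
    color plane, with the matrices given as lists of lists of
    intensities in [0.0, 1.0] and gain a scalar."""
    return [[gain * w * m for w, m in zip(weight_row, measured_row)]
            for weight_row, measured_row in zip(weight, measured)]

# Illustrative values: suppose red reflects at half the green response,
# so an assumed gain of 2.0 rescales the red plane; the weight matrix
# here is uniform (no geometry correction needed).
red_measured = [[0.4, 0.4], [0.4, 0.4]]
red_weight = [[1.0, 1.0], [1.0, 1.0]]
red_corrected = correct_plane(2.0, red_weight, red_measured)
print(red_corrected)  # [[0.8, 0.8], [0.8, 0.8]]
```

The same function is applied once per color plane, each with its own gain scalar and weight matrix, to produce the three totally corrected planes.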
(65) It will be apparent to those skilled in the art that various modifications and variations can be made to the methods and processes of this invention. Thus, it is intended that the present invention cover such modifications and variations, provided they come within the scope of the appended claims and their equivalents.
(66) The disclosures of all publications cited above are expressly incorporated herein by reference in their entireties to the same extent as if each were incorporated by reference individually.