IMAGE ANALYSIS SYSTEM AND METHOD
20200213569 · 2020-07-02
Inventors
- Alexander Grenov (Madison, WI)
- Damian W. Ashmead (Middletown, DE)
- Kevin K. Kim (Madison, WI)
- Francis J. Deck (Madison, WI)
- Chris Xavier Kauffold (Madison, WI)
CPC classification
H04N9/646
ELECTRICITY
International classification
G01N21/25
PHYSICS
Abstract
An image analysis system includes a video camera that collects YUV color images of a liquid sample disposed between a capital and a pedestal, the color images being collected while a light source shines light through an optical beam path between the capital and the pedestal, and a processor adapted to i) obtain from the YUV color images a grayscale component image and a light scatter component image, and ii) obtain at least one binary image of the grayscale component image and at least one binary image of the light scatter component image.
Claims
1.-24. (canceled)
25. A spectrometer including an image analysis system, the system comprising: a. a video camera that collects images of a liquid sample disposed between a capital and a pedestal of the spectrometer, the images being collected while a light source shines light through an optical beam path between the capital and the pedestal for photometric or spectrometric measurement; and b. a processor adapted to detect any bubble in a column of the liquid sample using the images.
26. The image analysis system of claim 25, wherein the images are YUV color images, and the processor is further adapted to i) obtain from the YUV color images a grayscale component image and a light scatter component image, and ii) obtain at least one binary image of the grayscale component image and at least one binary image of the light scatter component image.
27. The image analysis system of claim 26, wherein the at least one binary image of the grayscale component image includes a first binary image of the grayscale component image obtained from applying an upper dynamic threshold and a lower dynamic threshold obtained from an interpolation between left and right background variation thresholds in the grayscale component image.
28. The image analysis system of claim 27, wherein the at least one binary image of the grayscale component image includes a second binary image of a grayscale isotropic gradient image, the grayscale isotropic gradient image obtained using the grayscale component image and a static threshold based on isotropic gradient image background noise statistics.
29. The image analysis system of claim 28, wherein the at least one binary image of the grayscale component image includes a composite binary image obtained from a combination of the first and second binary images.
30. The image analysis system of claim 29, wherein the processor is further adapted to detect location of the column of the liquid sample and location of the optical beam path from the composite binary image.
31. The image analysis system of claim 30, wherein the processor is further adapted to detect any bubble in the column of the liquid sample using both the grayscale component image and the at least one binary image of the light scatter component image.
32. The image analysis system of claim 31, wherein using the grayscale component image includes applying a ring detection filter to a grayscale isotropic gradient image obtained from the grayscale component image.
33. The image analysis system of claim 31, wherein using the at least one binary light scatter component image includes applying a morphological filter to the at least one binary image of the light scatter component image.
34. The image analysis system of claim 31, wherein the processor is further adapted to distinguish a bubble in the optical beam path from a bubble out of the optical beam path using the grayscale component image, the at least one binary image of the light scatter component image, and the calculated location of the optical beam path.
35. A method of analyzing an image, the method comprising: a. collecting images of a liquid sample disposed between a capital and a pedestal of a spectrometer, the images being collected while a light source shines light through an optical beam path between the capital and the pedestal for photometric or spectrometric measurement; b. detecting location of a column of the liquid sample and location of the optical beam path from the images; c. detecting any bubble in the column of the liquid sample using the images; and d. reporting an image analysis summary to a display.
36. The method of analyzing an image of claim 35, wherein collecting images of the liquid sample includes collecting YUV color images, obtaining from the YUV color images a grayscale component image and a light scatter component image, and obtaining at least one binary image of the grayscale component image and at least one binary image of the light scatter component image.
37. The method of analyzing an image of claim 36, wherein the at least one binary image of the grayscale component image includes a first binary image of the grayscale component image obtained from applying an upper dynamic threshold and a lower dynamic threshold obtained from an interpolation between left and right background variation thresholds in the grayscale component image.
38. The method of analyzing an image of claim 37, wherein the at least one binary image of the grayscale component image includes a second binary image of a grayscale isotropic gradient image, the grayscale isotropic gradient image obtained using the grayscale component image and a static threshold based on isotropic gradient image background noise statistics.
39. The method of analyzing an image of claim 38, wherein the at least one binary image of the grayscale component image includes a composite binary image obtained from a combination of the first and second binary images.
40. The method of analyzing an image of claim 39, further including detecting location of a column of the liquid sample and location of the optical beam path from the composite binary image.
41. The method of analyzing an image of claim 40, wherein detecting any bubble in the column of the liquid sample includes using both the grayscale component image and the at least one binary image of the light scatter component image.
42. The method of analyzing an image of claim 41, wherein using the grayscale component image includes applying a ring detection filter to a grayscale isotropic gradient image obtained from the grayscale component image.
43. The method of analyzing an image of claim 41, wherein using the at least one binary light scatter component image includes applying a morphological filter to the at least one binary image of the light scatter component image.
44. The method of analyzing an image of claim 41, further including distinguishing a bubble in the optical beam path from a bubble out of the optical beam path using the grayscale component image, the at least one binary image of the light scatter component image, and the calculated location of the optical beam path.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0053] Like reference numerals refer to corresponding parts throughout the several views of the drawings.
DETAILED DESCRIPTION OF EMBODIMENTS
[0054] In the description of the invention herein, it is understood that a word appearing in the singular encompasses its plural counterpart, and a word appearing in the plural encompasses its singular counterpart, unless implicitly or explicitly understood or stated otherwise. Furthermore, it is understood that for any given component or embodiment described herein, any of the possible candidates or alternatives listed for that component may generally be used individually or in combination with one another, unless implicitly or explicitly understood or stated otherwise. Moreover, it is to be appreciated that the figures, as shown herein, are not necessarily drawn to scale, wherein some of the elements may be drawn merely for clarity of the invention. Also, reference numerals may be repeated among the various figures to show corresponding or analogous elements. Additionally, it will be understood that any list of such candidates or alternatives is merely illustrative, not limiting, unless implicitly or explicitly understood or stated otherwise. In addition, unless otherwise indicated, numbers expressing quantities of ingredients, constituents, reaction conditions and so forth used in the specification and claims are to be understood as being modified by the term about.
[0055] Accordingly, unless indicated to the contrary, the numerical parameters set forth in the specification and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by the subject matter presented herein. At the very least, and not as an attempt to limit the application of the doctrine of equivalents to the scope of the claims, each numerical parameter should at least be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the subject matter presented herein are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical values, however, inherently contain certain errors necessarily resulting from the standard deviation found in their respective testing measurements.
[0056] As the popularity of UV/Vis spectrometers such as the NanoDrop (Thermo Electron Scientific Instruments, Madison Wis.) grows, there is a demand to improve the reliability of its measurement technique that relies, as discussed above, on the surface tension of a liquid sample (drop). A problem is that the spectral measurement of the liquid drop (column) can be degraded by unevenness of the liquid column shape and its position during the measurement. The liquid column can be misshaped (skewed), off-center (relative to the light path of the instrument), can contain bubbles or other inclusions, or the liquid column can even be broken apart. Presently available instruments have no ability to automatically identify and report these problems, while the visual inspection of the liquid drop shape is very limited and unreliable. The design described herein includes a high-resolution video camera in front of the sample compartment that focuses on the liquid column and uses computer vision algorithms for automatic identification of the liquid column defects and reporting them to the instrument operator. The image analysis system also includes measurement and reporting of scattered light that is caused by bubbled liquid, which degrades the measurement. The video camera and spectrometer are synchronized together and every spectral measurement is accompanied with a column image quality metric.
[0058] The quality of the measurement depends on the quality of the measured liquid column 9 during the time when the light beam 3 is passing through it. The quality of the column 9 is very hard to analyze visually because the gap (the distance between the two interface surfaces 2 and 7) is too narrow: 1 mm or less.
[0059] The possible column defects can be summarized in the following categories:
[0060] Skewed and off-centered column. Examples of this defect are presented in
[0061] Bubbled column or column with inclusions. See
[0062] Broken column or empty compartment (no liquid drop). This is a terminal defect: no column quality measurement will be made. See
[0063] Turning back to
[0064] The camera video can be inspected by the operator of the instrument; however, a more accurate and convenient way is to use machine vision algorithms. A processor 70 is adapted to i) obtain from the YUV color images a grayscale component image and a light scatter component image, and ii) obtain at least one binary image of the grayscale component image and at least one binary image of the light scatter component image.
[0065] Scattered light is emitted while acquiring a spectrum on a column with inclusions, such as bubbles of gas or air. The blue wavelength range is the prevailing component in the scattered light because of the strong inverse-fourth-power (λ⁻⁴) wavelength dependence of Rayleigh scattering (shorter wavelengths (blue) are Rayleigh scattered more strongly than longer wavelengths (red)). The resulting spectrum quality can be degraded due to the loss of the beam energy that reflects from the bubbles and gets scattered. By applying machine vision algorithms, it is possible to quantitatively measure the amount of scattered light.
[0066] Although it is possible to analyze the original color image on a modern computer, it leads to unnecessary complexity and redundancy. For image analysis, two intensity-only (grayscale) images are created in one of three possible ways:
[0067] 1. When one has just one color RGB image (snapshot), the image is extracted and the following two component images are created:
[0068] a. A grayscale component (luma) image (L) is created by averaging the Red (R) and Green (G) components from the original RGB image. For every x,y-positioned image pixel, the following calculation is applied:
L(x,y)=(R(x,y)+G(x,y))/2;
[0069] b. The blue chromatic component (for the light scatter component) image (S) is created by using the original blue (B) component from the RGB image and calculating the following complementary image, for every x,y-positioned pixel as follows:
S(x,y)=max(0, B(x,y)−L(x,y));
[0070] 2. In the case of the YUV image format (that is available on the Android/Linux system) the calculation of the two component images is:
[0071] a. The grayscale component image is the Y (luma) component of the original YUV image, that is: L(x,y)=Y(x,y);
[0072] b. The light scatter component image S is created by using the U-chrominance component from the YUV image and calculating the following complementary image, for every x,y-positioned pixel: S(x,y)=max(0, U(x,y)−128);
[0073] 3. In case a sequence of YUV images of varying flash light is available from the camera video stream obtained according to the flowchart shown in
[0074] a. A grayscale component image is calculated as an average of all available Y_i (luma) components of the original YUV images from the sequence (step 801 in
L(x,y)=(Y_1(x,y)+Y_2(x,y)+Y_3(x,y))/3;
[0075] b. Let S_i(x,y) be the light scatter (blue chromatic) intensity for the i-th image in pixel (x,y), calculated from the U_i component using the formula above (see 2.b). Then the S(x,y) light scatter component image is calculated by taking the maximum of all available S_i(x,y) for each pixel (x,y) as follows:
S(x,y)=max(S_1(x,y), S_2(x,y), S_3(x,y));
[0076] The maximum of S_i(x,y) is used to obtain the maximum scattered light that corresponds to the flash occurrence moment.
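By way of illustration, the component-image calculations above can be sketched in Python/NumPy as follows (the function names, array layout, and use of three frames are the editor's assumptions, not part of the specification; the single-RGB case and the YUV-sequence case are shown):

```python
import numpy as np

def component_images_rgb(rgb):
    """RGB case: L(x,y) = (R + G) / 2 and S(x,y) = max(0, B - L)."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    L = (r + g) // 2                       # grayscale (luma) component image
    S = np.maximum(0, b - L)               # light scatter (blue chromatic) image
    return L, S

def component_images_yuv_sequence(y_frames, u_frames):
    """YUV-sequence case: L is the average of the luma frames; S is the
    per-pixel maximum of the per-frame scatter images S_i = max(0, U_i - 128)."""
    L = np.mean(np.stack([f.astype(np.float64) for f in y_frames]), axis=0)
    s_frames = [np.maximum(0, f.astype(np.int32) - 128) for f in u_frames]
    S = np.max(np.stack(s_frames), axis=0)
    return L, S
```

Taking the per-pixel maximum over the sequence, rather than the average, preserves the scatter intensity at the flash occurrence moment, as the text notes.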
[0077] See
[0078] The following steps form the liquid column analysis algorithm:
[0079] 1. Let a grayscale image L of size M×N consist of pixels g_i,j, such that i ∈ [0, M−1], j ∈ [0, N−1] and 0 ≤ g_i,j ≤ 255.
[0080] In other words, g_i,j is a pixel of the rectangular image area and its value (intensity) can vary from 0 to 255.
[0081] 2. Use the extracted grayscale image L (
[0082] 3. Create a horizontal cumulative profile by summation of the absolute pixel-minus-background values along each image column. That is, by calculating, for each column j, P_j = Σ_i |g_i,j − b|, where b is the estimated background intensity.
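Step 3's cumulative profile reduces to a single reduction over rows; a minimal NumPy sketch, assuming (as a simplification) a scalar background estimate:

```python
import numpy as np

def horizontal_cumulative_profile(gray, background):
    """For every image column j, sum |g[i, j] - background| over all rows i.
    Peaks of the profile mark columns that depart from the background."""
    return np.abs(gray.astype(np.float64) - background).sum(axis=0)
```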
[0083] 4. See
[0084] 5. Find the left and right edges of the instrument capital (upper part, see
[0085] 6. Extract a new ROI image that embraces the found features (the capital and the pedestal) with additional extension on both sides as shown in
[0086] 7. Apply a horizontal gradient (Sobel operator, see page 578 of Digital Image Processing, Rafael C. Gonzalez and Richard E. Woods, 2.sup.nd Ed., Prentice Hall, 2002, (hereinafter Gonzalez) the entire contents and teachings of which are hereby incorporated by reference in their entirety) filter to the extracted ROI grayscale image to find the lower horizontal capital edge 1420 and the upper horizontal pedestal edge 1410 thereby detecting a region-of-interest that includes a location of the capital and the pedestal from the grayscale component image as shown in
[0087] 8. Create a vertical cumulative profile shown in
[0088] 9. Find two main intensity peaks 1510 and 1520 on the vertical cumulative profile (
[0089] 10. Using the found boundaries from the previous step, extract the vertical part of the ROI sub-image for further processing (
[0090] 11. Find the right edge of the capital on the top of the ROI image (
[0091] 12. Continue on to finding the left edge of the capital. Apply a 135-degree diagonal gradient filter. Fit the top left set of the gradient intensity pixels with a 135-degree diagonal line segment by finding the best least-square fit.
[0092] 13. Use previously found capital diagonal edges and knowledge of the actual sizes of the capital and pedestal to extract the final ROI image that is centered with respect to both the instrument capital and pedestal (step 805 in
[0093] 14. Calculate background parameters for the left and the right parts of the image using the left background rectangular areas 2010 and 2020 and right background rectangular areas 2030 and 2040 where the background is expected (see white rectangles in
[0094] 15. A thresholding technique is then applied to create convenient binary images (an example is shown as a stippled image overlay in
[0095] 16. Create a first binary image of the grayscale ROI image by applying dynamic thresholding that uses an interpolation between the left and right background thresholds to calculate a threshold for each pixel individually.
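Step 16's dynamic thresholding might be sketched as follows (a simplified Python/NumPy interpretation: one lower/upper threshold pair per side and linear interpolation across columns are the editor's assumptions):

```python
import numpy as np

def dynamic_threshold_binarize(gray, t_left_lo, t_left_hi, t_right_lo, t_right_hi):
    """Linearly interpolate per-column lower and upper thresholds between the
    left and right background statistics, then mark as foreground any pixel
    falling outside its column's [lower, upper] band."""
    h, w = gray.shape
    frac = np.linspace(0.0, 1.0, w)                    # 0 at left edge, 1 at right
    lo = t_left_lo + frac * (t_right_lo - t_left_lo)   # per-column lower threshold
    hi = t_left_hi + frac * (t_right_hi - t_left_hi)   # per-column upper threshold
    return (gray < lo) | (gray > hi)                   # foreground = outside band
```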
[0096] 17. The highlighted stippled area 2050 shown in
[0097] 18. Create a second binary image by using an isotropic gradient of the same grayscale component image and apply static thresholding based on the gradient background statistic. The isotropic gradient image is assumed to have a zero mean value, so the standard deviation is calculated using only the selected left and right sets of rectangles. A statistical three-sigma (3σ) rule is used to create the threshold for binarization. The resulting second binary image is shown in
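Step 18's combination of an isotropic (Sobel-based) gradient and a three-sigma static threshold might look like this (an illustrative sketch; the zero-mean assumption follows the text, while the naive interior-only convolution is for clarity, not efficiency):

```python
import numpy as np

def isotropic_gradient(gray):
    """Isotropic gradient magnitude via horizontal and vertical Sobel kernels."""
    g = gray.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal Sobel kernel
    ky = kx.T                                            # vertical Sobel kernel
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    for i in range(1, g.shape[0] - 1):                   # borders stay zero
        for j in range(1, g.shape[1] - 1):
            patch = g[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def binarize_3sigma(grad, background_patches):
    """Assume a zero-mean gradient background; threshold at 3 sigma computed
    from the listed background patches (arrays of gradient samples)."""
    samples = np.concatenate([p.ravel() for p in background_patches])
    sigma = np.sqrt(np.mean(samples ** 2))   # std dev under zero-mean assumption
    return grad > 3.0 * sigma
```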
[0098] 19. Combine the two binary images: the first from the grayscale component image and the second from the isotropic gradient image (above) to produce a composite binary image. The combination enables creating a more complete foreground binary image from disconnected foreground segments.
[0099] 20. For further column shape detection and filling of the foreground cavities, two artificial foreground stripes 2410 and 2420 are added, one on the top 2410 and another one on the bottom 2420 (by setting binary pixels to value of 1).
[0100] 21. Morphological operations and a hole filling operation are used to fill foreground holes and smooth rough edges. See Gonzalez, pages 528-536. In
[0101] 22. The two artificial stripes are removed by setting binary pixels on the top and bottom horizontal edges to background value (0, transparent). Then, a sieving filter is applied for removing small features that account for noise (step 806 in
[0102] 23. Connected foreground object(s) are extracted by using a connected component extraction algorithm (see Gonzalez, page 536), thereby evaluating the integrity of the column of the liquid sample from the composite binary image. Normally, just one object matches the normal liquid column. If there are two or more objects, then it is a broken column case (see
[0103] 24. Calculate the area of the column shape (in pixels) from the detected binary object 2610, as shown in
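Step 23's connected-component extraction can be sketched with a breadth-first labeling pass (an illustrative Python sketch; the specification's implementation per Gonzalez may differ). One object suggests an intact column, two or more a broken column, and zero an empty compartment:

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """4-connected component labeling by BFS. Returns (object count, label map)."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and labels[si, sj] == 0:
                count += 1                       # found a new foreground object
                labels[si, sj] = count
                q = deque([(si, sj)])
                while q:
                    i, j = q.popleft()
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w \
                                and binary[ni, nj] and labels[ni, nj] == 0:
                            labels[ni, nj] = count
                            q.append((ni, nj))
    return count, labels
```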
[0104] Alternative image analysis methods for liquid column shape detection include detecting the object of interest, such as the liquid column, by using edge detection operators for extracting the object contour. These operators are based on computing the difference in pixel intensity between light and dark areas of the grayscale image. There are several basic edge detection (gradient) operators that can be applied: Sobel, Laplacian-of-Gaussian, Roberts, Prewitt, or a composite Canny algorithm. The last consists of several steps, including noise suppression and dynamic thresholding/binarization. See Canny, J., A Computational Approach To Edge Detection, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 8(6):679-698, 1986 (hereinafter Canny), the disclosure of which is hereby incorporated by reference in its entirety (however, where anything in the incorporated reference contradicts anything stated in the present application, the present application prevails).
[0105] However, all of these gradient-threshold methods can fall short when dealing with blurry and noisy images, such as an example of a blurry image of a liquid column shown in
[0106] The active contour tracking (also referred to as snakes) method can be used to address the disconnected or noisy (spurious) contour outcome. See Kass, M.; Witkin, A.; Terzopoulos, D., Snakes: Active Contour Models, International Journal of Computer Vision, Vol. 1(4): 321, 1988, and Chenyang Xu, Snakes, Shapes, and Gradient Vector Flow, IEEE Transactions on Image Processing, Vol. 7(3), 1998, the disclosures of which are hereby incorporated by reference in their entirety (however, where anything in the incorporated references contradicts anything stated in the present application, the present application prevails). The active contour tracking method is a combination of the edge detector operators followed by contour tracking of the (binarized) result that uses the contour curve properties such as its continuity and smoothness. The active contour method is based on the idea of using an energy functional, which represents a weighted combination of internal and external forces that are applied to the contour curve. The internal forces are governed by the physical properties of the contour (elasticity and bending) while the external forces come from the image properties (gradient). The problem is solved by finding an optimum (minimum) of the energy functional. The total energy functional is defined as the following definite integral over the whole range of the parameterized contour v(s)=v(x(s), y(s)), where s belongs to C = [0, 1]:
E*_v = ∫_C E_ac(v(s)) ds = ∫_C [E_in(v(s)) + E_ex(v(s))] ds
[0107] where E.sub.in(v(s)) represents the internal energy of the active contour due to elasticity and bending, and E.sub.ex(v(s)) represents the external (image) forces that are applied to the contour. Internal energy is defined as the following two-term sum:
E_in = (α|v′(s)|² + β|v″(s)|²)/2
[0108] The first-order term, which is controlled by the coefficient α, adjusts the elasticity of the active contour. The second-order term, which is controlled by the coefficient β, adjusts the stiffness of the active contour. In other words, the first part keeps the active contour short (discourages stretching), while the second part keeps it straight (discourages bending).
[0109] Given a grayscale image L(x,y), which represents a function of intensity in each (x,y)-position of the image, the image (external) force is chosen to lead an active contour toward the object edges and can be represented by two functionals (see Canny):
E_ex^(1) = −|∇L(x,y)|²
E_ex^(2) = −|∇[G_σ(x,y) * L(x,y)]|²
where G_σ(x,y) is a two-dimensional Gaussian function with standard deviation σ, ∇ is the gradient operator and * denotes the convolution operator. In other words, E_ex^(2) represents a gradient of the smoothed L(x,y) image.
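The external energy E_ex^(2) can be illustrated as follows (a Python/NumPy sketch; separable 1-D Gaussian convolution and np.gradient stand in for the convolution and gradient operators, and the kernel radius of 3σ is an arbitrary choice):

```python
import numpy as np

def external_energy(gray, sigma=1.0):
    """Negative squared gradient magnitude of the Gaussian-smoothed image.
    Minima of this energy sit on strong edges, pulling the snake toward them."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()                                   # normalized 1-D Gaussian
    # Separable smoothing: convolve rows, then columns
    smoothed = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode='same'), 1, gray.astype(np.float64))
    smoothed = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode='same'), 0, smoothed)
    gy, gx = np.gradient(smoothed)
    return -(gx ** 2 + gy ** 2)
```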
[0110] In the case of the binary image B(x,y) the external forces can be formulated as the following:
E_ex^(1) = B̄(x,y)
E_ex^(2) = G_σ(x,y) * B̄(x,y)
where B̄(x,y) represents an inverted binary image.
[0111] The parameter σ controls the smoothness of either the grayscale or the binary image: the larger the parameter, the blurrier the images and their object edges are. The purpose of the σ parameter is to extend the search range for the optimization of the active contour.
[0112] The minimum of E*_v can be found using the Euler-Lagrange equation:
αv″(s) − βv″″(s) − ∇E_ex(v(s)) = 0
[0113] Let's denote F_in = αv″(s) − βv″″(s) and F_ex = −∇E_ex(v(s)); then the latter equation can be re-formulated as a force balance equation:
F.sub.in+F.sub.ex=0
[0114] The F.sub.in term represents the internal force that discourages stretching and bending while the external force F.sub.ex pulls the active contour toward the desired image edges. Solving the above equation is accomplished with the gradient descent method, which requires converting the active contour v into a function of time v(s, t). Then the partial derivative of v(s, t) with respect to t can be applied to both sides of the Euler-Lagrange equation. After several iterations when the active contour (snake) has converged to a minimum, its derivative with respect to time becomes zero, and the equation is solved.
∂v(s,t)/∂t = αv″(s,t) − βv″″(s,t) − ∇E_ex(v(s,t))
[0115] A numerical solution of the above equation can be found by discretizing the parameters s and t and solving the resulting discrete equation iteratively.
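One gradient-descent update of the discretized contour can be sketched as follows (a minimal illustration assuming a closed contour, periodic finite differences via np.roll, and an explicit Euler step; the coefficient values and step size are arbitrary):

```python
import numpy as np

def snake_step(v, ext_force, alpha=0.1, beta=0.01, dt=0.1):
    """One explicit Euler step of dv/dt = alpha*v'' - beta*v'''' + F_ex.
    v: (n, 2) closed-contour points; ext_force: (n, 2) external force at v."""
    # Periodic second difference approximates v''(s)
    d2 = np.roll(v, -1, axis=0) - 2 * v + np.roll(v, 1, axis=0)
    # Second difference of d2 approximates v''''(s)
    d4 = np.roll(d2, -1, axis=0) - 2 * d2 + np.roll(d2, 1, axis=0)
    return v + dt * (alpha * d2 - beta * d4 + ext_force)
```

With zero external force the elasticity term dominates and the contour slowly shrinks, which is the expected behavior of the internal energy alone.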
[0116] There is a modification of the active contour model where the curve v is defined implicitly as a function of a new parameter r, i.e., s = s(r). Then the problem can be reformulated in a geodesic form (geodesic active contours, GAC), which states that the active contour optimization can be expressed as finding a curve of minimal (geodesic) length under the defined constraints. See Caselles, V., Kimmel, R., and Sapiro, G., Geodesic Active Contours, International Journal of Computer Vision, Vol. 22(1): 61-79, 1997, the disclosure of which is hereby incorporated by reference in its entirety (however, where anything in the incorporated reference contradicts anything stated in the present application, the present application prevails).
[0117] In another aspect, the image analysis techniques described below are concerned with detecting inclusions (in the form of bubbles) and scattered light that can affect the spectral measurements. Two parameters can be measured that are found to be useful in combination: a bubble presence score and scattered light intensity.
[0118] An isotropic gradient image of the grayscale component image is used for bubble inclusion detection (see
[0119] A simplified explanation of the ring detection filter is presented in
[0120] The following steps form the ring/bubble presence score calculation algorithm used for detecting any bubble in the column of the liquid sample using both the grayscale component image and the binary image of the light scatter component image:
[0121] 1. apply successive ring detection filtering to a grayscale isotropic gradient image obtained from the grayscale component image (
[0122] 2. accumulate the result into the cumulative image score.
[0123] 3. use the calculated light path rectangle (2610 obtained in step 24 above, shown in
[0124] 4. start from a minimal ring filter size (3×3) and increase it by 2 (the next is 5×5) and so on up to a predefined maximum filter diameter (15×15, for instance) to cover all possible bubble sizes. While calculating, skip the scores that are below a certain threshold to avoid accumulating values due to noise. The noise threshold is calculated based on statistics of the background rectangular areas for the gradient image (step 810 in
[0125] 5. extract the same ROI portion (as that of grayscale component image shown in
[0126] 6. as shown in the workflow in
[0127] 7. check the bubble presence score and the scattered light intensity score. If both scores are greater than one, then report the defect (steps 812 and 814 in
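Steps 1, 2 and 4 of the score calculation might be sketched as follows (a simplified Python/NumPy interpretation; the annulus-shaped kernel and its normalization are the editor's assumptions, since the specification's exact ring filter is defined with reference to the figures):

```python
import numpy as np

def ring_kernel(size):
    """Ring-shaped matched filter: ones on an annulus near the kernel radius,
    normalized to sum to one."""
    assert size % 2 == 1
    c = size // 2
    yy, xx = np.mgrid[0:size, 0:size]
    r = np.hypot(yy - c, xx - c)
    ring = ((r >= c - 0.5) & (r <= c + 0.5)).astype(np.float64)
    return ring / ring.sum()

def bubble_presence_score(grad, max_size=15, noise_threshold=1.0):
    """Apply ring filters of size 3x3, 5x5, ... up to max_size to the gradient
    image, keep responses above the noise threshold, and accumulate them into
    one cumulative score."""
    g = grad.astype(np.float64)
    h, w = g.shape
    score = 0.0
    for size in range(3, max_size + 1, 2):
        k = ring_kernel(size)
        c = size // 2
        for i in range(c, h - c):
            for j in range(c, w - c):
                resp = (g[i - c:i + c + 1, j - c:j + c + 1] * k).sum()
                if resp > noise_threshold:   # skip sub-threshold (noise) responses
                    score += resp
    return score
```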
[0128] To distinguish a bubble in the optical beam path from a bubble out of the optical beam path, the calculated optical beam center X_C 1840, a theoretical maximum optical beam radius R_O (in one embodiment, R_O is equal to about 1/20th of the calculated capital diameter 1110), and the column edges X_L 2620 and X_R 2630 (calculated optical path edges) are used. The calculation area for both the bubble detection filter and the scatter light score is limited as follows: the left limit is max(X_C − R_O, X_L) and the right limit is min(X_C + R_O, X_R). Limiting the calculation area enables performing the bubble and light scatter score measurement only in the part of the image that is known to distort the spectral measurement.
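The clamping just described reduces to two comparisons; a sketch (the function name is hypothetical):

```python
def scoring_limits(x_c, r_o, x_l, x_r):
    """Restrict bubble/scatter scoring to the part of the column the beam
    actually crosses: clamp the beam interval [x_c - r_o, x_c + r_o] to the
    detected optical-path edges [x_l, x_r]."""
    return max(x_c - r_o, x_l), min(x_c + r_o, x_r)
```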
[0129] Images of a liquid column with bubbles, as shown in
[0130] Let's consider only circular shapes, although the Hough transform can handle elliptical shapes as well. The Hough transform applies to the binarized (thresholded) image of the gradient image or a binary image of the detected bubble edges. For instance, the Canny operator can be used for edge detection and for thresholding it to a binary image.
[0131] The standard circle equation has the following form:
(x − a)² + (y − b)² = r²,
where r is the radius of the circle and (a,b) is a coordinate of the center of the circle.
[0132] The Hough transform applies to the digital form of the circle equation, where all parameters are discrete: x and y are indices of a column and row of a matrix of binary (0 or 1) pixels, parameters a and b are also indices (relative positions) of circle centers, and r spans the possible radii of circles that fit into the image and are bound to the physical objects of interest (bubbles in this case). The radii usually start from a value greater than one, since an approximation of a radius-one circle on the digital image is too rough (it represents a square). Then, every binary edge (contour) pixel (x_i, y_i) can be transformed into an approximation of a circular cone in the 3D (a, b, r) parameter space. If all contour points lie on a circle, then all of their corresponding cones will intersect at a single point (a_i, b_i, r_i) corresponding to the parameters of the circle.
[0133] Since the space is digital, the cones that satisfy the digital form of the circle equation will not intersect at one pixel, but instead represent a small cluster of pixels with a Gaussian-like density distribution, whose center (most dense value) is the resulting (a_i, b_i, r_i) circle triplet. In order to implement the distribution space, an additional voting (integer) value v is needed and the result of the transformation is a 3D matrix of voting values:
V=v(a,b,r),
where a spans through all image columns, b spans through all image rows and r spans through all possible radii of the objects of interest.
[0134] The final and most challenging part of the Hough algorithm is finding the points of local maxima in the resulting matrix V (parametric space). Usually, it requires applying an additional filter for the final matrix V. The resulting points of local maxima can be found by applying a threshold to the filtered matrix V, and they represent all possible circular objects. Because of the voting technique, the algorithm works well even for incomplete or noisy images.
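The voting stage described above can be sketched as follows (an illustrative Python/NumPy version; the angular sampling density of 64 directions per edge pixel is an arbitrary choice, and peak finding/filtering of the accumulator is left out):

```python
import numpy as np

def hough_circles(edges, radii):
    """Hough voting: every edge pixel votes for all circle centers (a, b) at
    distance r from it; dense cells of the (b, a, r) accumulator correspond
    to detected circles."""
    h, w = edges.shape
    votes = np.zeros((h, w, len(radii)), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for y, x in zip(ys, xs):
        for ri, r in enumerate(radii):
            # Candidate centers lie on a circle of radius r around the edge pixel
            a = np.round(x - r * np.cos(thetas)).astype(int)
            b = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
            np.add.at(votes, (b[ok], a[ok], ri), 1)   # unbuffered accumulation
    return votes
```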
[0135] For the image analysis summary report shown in Table 1, the following parameters with exemplary values are displayed to the operator of the instrument:
TABLE 1. Image analysis report
Off-Center Offset (pixels): 5
Optical Path Diameter (pixels): 208
Light Scatter Intensity Score: 21.9
Bubble Presence Score: 36.0
Column Feature Area (pixels): 25554
Average ROI Pixel Intensity: 148
Column Optical Path Length (pixels): 119
[0136] Off-center Offset: shows the column shape center offset, in pixels (see
[0137] Optical Path Diameter: the calculated column light/optical path (enclosed cylinder) diameter, in pixels. See step 24 above for details of its calculation.
[0138] Light Scatter Score: measured light scatter normalized intensity, in arbitrary fractional units; a value of 1 or greater usually indicates bubble/inclusion defects. The calculation is shown in step 4 of the ring/bubble presence score calculation algorithm above.
[0139] Bubble Presence Score: in arbitrary units, a value of more than 1 indicates the presence of bubbles. The bubble presence score is used in combination with light scatter score to identify bubbled (defective) columns. For the parameter calculation details, see step 4 of the ring/bubble presence score calculation algorithm above.
[0140] Column Feature Area: measured area of the calculated column shape in pixels.
[0141] Column Optical Path Length: measured height of the calculated light path rectangle in pixels, described in step 24 above.
[0142] Average ROI Pixel Intensity: average image intensity (between 0 and 255), that is useful for detecting underexposed or overexposed grayscale images and adjusting the binarization thresholds.
[0143] The liquid column analysis algorithm produces the following software completion codes:
[0144] Undefined: initial value that means either the analysis was interrupted or failed during initial ROI extraction stage (abnormal condition);
[0145] OK: normal column, expect good spectral reading (step 813 in
[0146] Defective column: check the off-center value, the bubble presence score and the light scatter score to identify the reason (step 814 in
[0147] Empty Compartment: no liquid column was detected (step 808 in
[0148] Broken Column: no connection between the interface surfaces (step 808 in
[0149] Column Is Too Short: too short a distance between the interface surfaces (abnormal condition);
[0150] Poor Background: image background quality is too poor for the analysis (abnormal condition) (step 803 in
[0151] While the invention has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications will be appreciated by those skilled in the art to adapt a particular instrument, situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims.