Method for Correcting A Synthetic Aperture Radar Antenna Beam Image
20230135348 · 2023-05-04
Inventors
- Cameron Musgrove (Bixby, OK, US)
- Griffin Gothard (Satellite Beach, FL, US)
- Daniel Faircloth (Huntsville, AL, US)
CPC classification
G01S13/90
PHYSICS
International classification
G01S13/90
PHYSICS
Abstract
A method for correcting a synthetic aperture radar (SAR) antenna beam image comprising: collecting SAR data, forming an uncorrected image, isolating a pixel value from the uncorrected image, performing an inverse image formation on the isolated pixel value to convert the isolated pixel value into a phase history, calculating actual isolated pixel value location in the uncorrected image, computing range loss, antenna beam, and phase corrections for the isolated pixel value, interpolating range loss corrections, antenna beam pattern corrections, and phase corrections in the phase history, applying the interpolated corrections to the isolated pixel value phase history thereby forming a corrected phase history, converting the corrected phase history back into a corrected image, replacing the corresponding uncorrected pixel value in the uncorrected image with the corrected isolated pixel value, and repeating this process for all uncorrected pixel values thereby providing a corrected SAR image.
Claims
1. A computer and software implemented method for correcting a synthetic aperture radar (SAR) antenna beam image, comprising: a) collecting SAR image data, including phase history and wave number domain, from an object; b) forming an uncorrected image I.sub.uc of the object from the SAR collected data using an invertible image formation algorithm; c) isolating a pixel value I.sup.//.sub.uc(x,y) from the uncorrected image I.sub.uc, then inserting the isolated pixel value I.sub.uc(x,y) into an image with all pixel values having a zero value except the isolated pixel value I.sub.uc (x,y), thereby creating image I.sup.//.sub.uc, performing an inverse image formation on the image I.sup.//.sub.uc to create a phase history X.sup.//.sub.v, that represents only the isolated pixel value I.sub.uc(x,y) from the image I.sup.//.sub.uc; d) detecting the location of the isolated pixel value I.sub.uc (x,y) relative to a reference point and calculating an actual isolated pixel value location S.sub.x.sup./S.sub.y.sup./, based on detected isolated pixel value location in the uncorrected image I.sub.uc; e) computing range loss corrections for the isolated pixel value I.sup.//.sub.uc(x,y), based on a range to actual pixel location; f) computing antenna beam pattern corrections for the isolated pixel value I.sup.//.sub.uc (x,y) based on frequency and angle to a measurement location at every SAR sampling position; g) calculating phase corrections for the isolated pixel value I.sup.//.sub.uc (x,y) using an image formation algorithm; h) interpolating range loss corrections, antenna beam pattern corrections, and phase corrections into an interpolated phase history X.sup.//.sub.corr according to the image formation algorithm of Step g); i) applying the interpolated phase history, X.sup.//.sub.corr, to the phase history X.sup.//.sub.v forming a corrected phase history X.sup.///.sub.v representing I.sup.//.sub.uc; j) reversing step c) by transforming the corrected phase history X.sup.///.sub.v, 
into a corrected image I.sup.//.sub.c; k) replacing the corresponding uncorrected pixel value I.sub.uc(x,y) in the uncorrected image I.sub.uc with the corrected isolated pixel value I.sup.//.sub.c(x,y); and l) repeating steps c) through l) until all uncorrected pixel values in the uncorrected image I.sub.uc are replaced with corrected pixel values from image I.sup.//.sub.c, thereby providing a corrected SAR image of the object.
2. The computer and software implemented method of claim 1, further comprising, in step b), forming the uncorrected image I.sub.uc using a forward transformation from the phase history of the SAR collected data.
3. The computer and software implemented method of claim 1, wherein, in step c), the step of converting the isolated pixel value I″.sub.uc (x,y) into a phase history X″.sub.v comprises reverse transforming the pixel image I″.sub.uc into the phase history X″.sub.v.
4. The computer and software implemented method of claim 1, further comprising, in step d), calculating the pixel location of the isolated pixel value I.sup.//.sub.uc as the number of pixels the isolated pixel value I.sup.//.sub.uc is distant from the reference point, multiplying the number of pixels by pixel spacing, and estimating the actual pixel location (s.sub.x′ s.sub.y′) of the isolated pixel value I.sup.//.sub.uc.
5. The computer and software implemented method of claim 1, further comprising, in step e), calculating range loss using a radar range equation.
6. The computer and software implemented method of claim 1, further comprising, in step f), calculating an antenna beam factor correction Amp.sub.fac for the isolated pixel value I.sup.//.sub.uc (x,y) and applying the antenna beam factor correction to the isolated pixel value I.sup.//.sub.uc (x,y) on a per-pulse basis.
7. The computer and software implemented method of claim 1, further comprising, in step g), calculating phase corrections for the isolated pixel value I.sup.//.sub.uc (x,y) in the phase history X.sub.v″ based on the SAR position as the SAR collected the data from the object.
8. The computer and software implemented method of claim 1, further comprising, in step i), correcting the amplitude of the corrected image I.sup.//.sub.c.
9. A computer and software implemented method for correcting a synthetic aperture radar (SAR) antenna beam image, comprising: a) collecting SAR image data, including phase history and wave number domain, from an object; b) forming an uncorrected image I.sub.uc of the object from the SAR collected data using an invertible image formation algorithm, forming the uncorrected image using a forward transformation from the phase history of the SAR collected data; c) isolating a pixel value I.sup.//.sub.uc (x,y) from the uncorrected image I.sub.uc, then inserting the isolated pixel value I.sub.uc(x,y) into an image with all pixel values having a zero value except the isolated pixel value I.sub.uc(x,y), thereby creating image I.sup.//.sub.uc, performing an inverse image formation on the image I.sup.//.sub.uc to create a phase history X.sup.//.sub.v, that represents only the isolated pixel value I.sub.uc(x,y) from the image I.sup.//.sub.uc; d) detecting the location of the isolated pixel value I.sub.uc(x,y) relative to a reference point and calculating an actual isolated pixel value location S.sub.x.sup./S.sub.y.sup./, based on detected isolated pixel value location in the uncorrected image I.sub.uc, calculating the pixel location of the isolated pixel value I.sup.//.sub.uc (x,y) as the number of pixels the isolated pixel value I.sup.//.sub.uc (x,y) is distant from the reference point, multiplying the number of pixels by pixel spacing, and estimating the actual location (s.sub.x′ s.sub.y′) of the isolated pixel value I.sup.//.sub.uc (x,y); e) computing range loss corrections for the isolated pixel value I.sup.//.sub.uc (x,y), based on a range to actual pixel location; f) computing antenna beam pattern corrections for the isolated pixel value I.sup.//.sub.uc (x,y) based on frequency and angle to a measurement location at every SAR sampling position; g) calculating phase corrections for the isolated pixel value I.sup.//.sub.uc (x,y) using an image formation algorithm; h) 
interpolating range loss corrections, antenna beam pattern corrections, and phase corrections in the phase history X.sup.//.sub.corr according to the image formation algorithm of Step g); i) applying the interpolated phase history X.sup.//.sub.corr to the phase history X.sup.//.sub.v forming a corrected phase history X.sup.///.sub.v; j) reversing step c) by transforming the corrected phase history X.sup.///.sub.v into a corrected image I.sup.//.sub.c; k) replacing the corresponding uncorrected pixel value I.sub.uc(x,y) in the uncorrected image I.sub.uc with the corrected isolated pixel value I.sup.//.sub.c(x,y); and l) repeating steps c) through l) until all uncorrected pixel values in the uncorrected image I.sub.uc are replaced with corrected pixel values from image I.sup.//.sub.c, thereby providing a corrected SAR image of the object.
10. The computer and software implemented method of claim 9, further comprising, in step e), calculating range loss using a radar range equation.
11. The computer and software implemented method of claim 9, further comprising, in step f), calculating an antenna beam pattern correction Amp.sub.fac for the isolated pixel value I.sup.//.sub.uc (x,y) and applying the antenna beam pattern correction to the isolated pixel value I.sup.//.sub.uc (x,y) on a per-pulse basis.
12. The computer and software implemented method of claim 9, further comprising, in step g), calculating phase corrections for the isolated pixel value I.sup.//.sub.uc (x,y) in the phase history X.sub.v″ based on the SAR position as the SAR collected the data from the object.
13. The computer and software implemented method of claim 9, further comprising, in step i), correcting the amplitude of the corrected image I.sup.//.sub.c.
14. A computer and software implemented method for correcting a synthetic aperture radar (SAR) antenna beam image, comprising: a) collecting SAR image data, including phase history and wave number domain, from an object; b) forming an uncorrected image I.sub.uc of the object from the SAR collected data using an invertible image formation algorithm, forming the uncorrected image using a forward transformation from the phase history of the SAR collected data; c) isolating a pixel value I.sup.//.sub.uc (x,y) from the uncorrected image I.sub.uc, then inserting the isolated pixel value I.sub.uc(x,y) into an image with all pixel values having a zero value except the isolated pixel value I.sub.uc(x,y), thereby creating image I.sup.//.sub.uc, performing an inverse image formation on the image I.sup.//.sub.uc to create a phase history X.sup.//.sub.v that represents only the isolated pixel value I.sub.uc(x,y) from the image I.sup.//.sub.uc; d) detecting the location of the isolated pixel value I.sup.//.sub.uc (x,y) relative to a reference point and calculating actual isolated pixel value location S.sub.x.sup./S.sub.y.sup./, based on detected isolated pixel value location in the uncorrected image I.sub.uc, calculating the pixel location of the isolated pixel value I.sup.//.sub.uc (x,y) as the number of pixels the isolated pixel value I.sup.//.sub.uc (x,y) is distant from the reference point, multiplying the number of pixels by pixel spacing, and estimating the actual location (s.sub.x′ s.sub.y′) of the isolated pixel value I.sup.//.sub.uc (x,y); e) computing range loss corrections for the isolated pixel value I.sup.//.sub.uc (x,y), based on a range to actual pixel location, and calculating range loss based on a radar range equation; f) computing antenna beam pattern corrections for the isolated pixel value I.sup.//.sub.uc (x,y) based on frequency and angle to a measurement location at every SAR sampling position, calculating a range factor correction Amp.sub.fac for the 
isolated pixel, and applying the range factor correction to the isolated pixel on a per-pulse basis; g) calculating phase corrections for the isolated pixel value I.sup.//.sub.uc (x,y) using an image formation algorithm; h) interpolating range loss corrections, antenna beam pattern corrections, and phase corrections into an interpolated phase history X.sup.//.sub.corr according to the image formation algorithm of Step g) and calculating phase corrections for the isolated pixel value I.sup.//.sub.uc (x,y) in the phase history domain based on the SAR position as the SAR collected the data from the object; i) applying the interpolated phase history, X.sup.//.sub.corr, to the phase history X.sup.//.sub.v forming a corrected phase history X.sup.///.sub.v representing I.sup.//.sub.uc (x,y); j) reversing step c) by converting the corrected phase history X.sup.///.sub.v into a corrected image I.sup.//.sub.c; k) replacing the corresponding uncorrected pixel value I.sub.uc(x,y) in the uncorrected image I.sub.uc with the corrected isolated pixel value I.sup.//.sub.c (x,y); and l) repeating steps c) through l) until all uncorrected pixel values in the uncorrected image I.sub.uc are replaced with corrected pixel values from image I.sup.//.sub.c, thereby providing a corrected SAR image of the object.
15. The computer and software implemented method of claim 14, further comprising, in step i), correcting the amplitude of the corrected image I.sup.//.sub.c.
16. A computer and software implemented method for correcting a synthetic aperture radar (SAR) antenna beam image, comprising: a) collecting SAR image data, including phase history and wave number domain, from an object; b) forming an uncorrected image I.sub.uc of the object from the SAR collected data using an invertible image formation algorithm; c) isolating a pixel value I.sup.//.sub.uc (x,y) from the uncorrected image I.sub.uc, then inserting the isolated pixel value I.sub.uc(x,y) into an image with all pixel values having a zero value except the isolated pixel value I.sub.uc(x,y), thereby creating image I.sup.//.sub.uc, performing an inverse image formation on the image I.sup.//.sub.uc to create a phase history X.sup.//.sub.v that represents only the isolated pixel value I.sub.uc(x,y) from the image I.sup.//.sub.uc; d) detecting the location of the isolated pixel value I.sub.uc(x,y) relative to a reference point and calculating an actual isolated pixel value location S.sub.x.sup./S.sub.y.sup./, based on detected isolated pixel value location in the uncorrected image I.sub.uc; e) computing antenna beam pattern corrections for the isolated pixel value I.sup.//.sub.uc (x,y) based on frequency and angle to a measurement location at every SAR sampling position; f) interpolating antenna beam pattern corrections into an interpolated phase history X.sup.//.sub.corr using an image formation algorithm; g) applying the interpolated phase history, X.sup.//.sub.corr, to the phase history X.sup.//.sub.v forming a corrected phase history X.sup.///.sub.v representing I.sup.//.sub.uc; h) reversing step c) by transforming the corrected phase history X.sup.///.sub.v into a corrected image I.sup.//.sub.c; i) replacing the corresponding uncorrected pixel value I.sub.uc(x,y) in the uncorrected image I.sub.uc with the corrected isolated pixel value I.sup.//.sub.c(x,y); and j) repeating steps c) through i) until all uncorrected pixel values in the uncorrected image I.sub.uc are 
replaced with corrected pixel values from image I.sup.//.sub.c, thereby providing a corrected SAR image of the object.
17. The computer and software implemented method of claim 16, further comprising, in step b), forming the uncorrected image I.sub.uc using a forward transformation from the phase history of the SAR collected data.
18. The computer and software implemented method of claim 17, wherein, in step c), the step of performing an inverse image formation on the image I″.sub.uc to create a phase history X″.sub.v comprises reverse transforming the pixel image I″.sub.uc into the phase history X″.sub.v.
19. The computer and software implemented method of claim 18, further comprising, in step d), calculating the pixel location of the isolated pixel value I.sup.//.sub.uc as the number of pixels the isolated pixel value I.sup.//.sub.uc is distant from the reference point, multiplying the number of pixels by pixel spacing, and estimating the actual pixel location (s.sub.x′ s.sub.y′) of the isolated pixel value I.sup.//.sub.uc.
20. The computer and software implemented method of claim 19, further comprising, in step g), correcting the amplitude of the corrected image I.sup.//.sub.c.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
[0016] While the following description details the preferred embodiments of the present invention, it is to be understood that the invention is not limited in its application to the details of arrangement of the parts as described and shown in the figures disclosed herein, since the invention is capable of other embodiments and of being practiced in various ways.
[0017] The present invention provides a method for implementing a frequency and spatially variant beam pattern correction for an instrumentation radar in a strip map SAR collection mode, wherein the correction relies solely upon the measured antenna patterns and not upon any particular image content to correct the image. The steps of the method are illustrated in the flow chart of
[0018] Methods of spotlight data collection are well known in the art. Spotlight mode collects data at constant angle increments relative to a defined point in space and steers the antenna at that point. Strip map data is converted to spotlight format by known methods in which each collected pulse is shifted in time as if the radar had flown a circle around the defined point. For convenience, the defined point in space is taken to be the center of the final image, and it can correspond to the center of a collection rail SAR. The specific method to make this correction depends upon the waveform used to collect the SAR data.
[0019] For linear frequency modulated waveforms only, the method includes defining a value, R_line, as the distance between the center of the image/target area and the position of the radar at the center of the collection rail SAR. The method further includes defining a vector, R_point, of the distances between the center of the image/target area and the location of the radar for each pulse collected. The range difference, defined as delR=R_line−R_point, is then calculated, and the following correction is applied on a pulse-by-pulse basis as the range difference changes for each pulse:
where X.sub.v is the stripmap formatted phase history data and X.sub.v′ is the spotlight formatted phase history data.
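The per-pulse correction of paragraph [0019] can be sketched as follows. The phase-ramp form exp(+j·4π·f·delR/c), its sign convention, and all function and variable names are assumptions for illustration only; the patent's own correction equation is not reproduced above.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def stripmap_to_spotlight(X_v, freqs, R_line, R_point):
    """Convert stripmap phase history X_v (pulses x frequency samples)
    to spotlight format via a per-pulse phase ramp.  The
    exp(+j*4*pi*f*delR/c) form and its sign are assumptions."""
    delR = R_line - R_point                        # range difference per pulse
    ramp = np.exp(1j * 4 * np.pi * np.outer(delR, freqs) / C)
    return X_v * ramp                              # spotlight-formatted X_v'

# toy usage: 4 pulses, 8 frequency samples
X_v = np.ones((4, 8), dtype=complex)
freqs = np.linspace(9e9, 10e9, 8)
R_point = np.array([10.0, 10.1, 10.2, 10.3])       # radar-to-center per pulse
X_spot = stripmap_to_spotlight(X_v, freqs, 10.0, R_point)
```

Because the correction is pure phase, pulse magnitudes are unchanged, and a pulse with zero range difference is untouched.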
[0020] An initial uncorrected image I.sub.uc of the object is formed using a polar format algorithm (or another invertible image formation algorithm) (Step 5). A beam correction is applied using any invertible image formation algorithm. The polar format algorithm is used as an example. Invertible means that the image and collected data (i.e. phase history) can be transformed back and forth between the image and data domains. The transformation between image and phase history is a forward or reverse transformation.
[0021] The forward transformation converts a phase history to an image. The reverse transformation converts the image to a phase history. An expression describing the forward transformation from the phase history, X′.sub.v, to an uncorrected image is I.sub.uc=F{X′.sub.v}, where F denotes the forward image formation operator.
[0022] Steps 6-14 are performed for each uncorrected pixel value I.sub.uc (x,y) in the initial uncorrected image I.sub.uc. A new image, I.sup.//.sub.uc, is created the same size as I.sub.uc that contains all zero values except for the isolated pixel from the uncorrected image I.sub.uc; an inverse image formation is then performed, converting the isolated pixel value I.sup.//.sub.uc into a phase history X.sup.//.sub.v (Step 6). Step 6 can be implemented in a loop over every image pixel in I.sub.uc; other implementations process all image pixels at the same time, in parallel. The steps for each pixel in the uncorrected image are: selecting a pixel, zero-valuing the remainder of the image pixels, and creating a single pixel value image, I″.sub.uc, of the same size as the uncorrected image, I.sub.uc. The image I″.sub.uc is then reverse transformed into a phase history, X″.sub.v, that represents only that pixel location:
X″.sub.v=F.sup.−1{I″.sub.uc}, where F.sup.−1 is the reverse transformation.
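Step 6 can be sketched as follows. A 2-D FFT pair stands in here for the invertible image formation algorithm (the patent's example is the polar format algorithm), and the function name is hypothetical.

```python
import numpy as np

def pixel_phase_history(I_uc, x, y):
    """Step 6: isolate pixel (x, y) into an otherwise zero image I''_uc,
    then reverse-transform it into a phase history X''_v.  A 2-D inverse
    FFT stands in for the invertible image formation algorithm."""
    I2_uc = np.zeros_like(I_uc)
    I2_uc[y, x] = I_uc[y, x]          # single-pixel image I''_uc
    return np.fft.ifft2(I2_uc)        # phase history X''_v

rng = np.random.default_rng(0)
I_uc = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
X2_v = pixel_phase_history(I_uc, 3, 5)
I2_back = np.fft.fft2(X2_v)           # forward transform recovers I''_uc
```

Invertibility means the forward transform of X″.sub.v reproduces exactly the single-pixel image, which is what makes the per-pixel correction loop possible.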
[0023] The actual location for each pixel is calculated in the uncorrected image based on detected pixel location (Step 7). The pixel location is relative to a reference point. The reference point can be defined anywhere. It is convenient to define the center of the image as the reference point. The pixel location is calculated as the number of pixels from the reference point, then multiplied by the pixel spacing. The pixel location relative to the reference point is now known.
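The pixel-location calculation of Step 7 can be sketched as follows; the function name and the choice of image center as reference point are illustrative assumptions.

```python
def pixel_location(ix, iy, n_cols, n_rows, dx, dy):
    """Step 7: location of pixel (ix, iy) relative to the image-center
    reference point -- pixel offset from the center multiplied by the
    pixel spacing (dx, dy)."""
    sx = (ix - n_cols // 2) * dx
    sy = (iy - n_rows // 2) * dy
    return sx, sy

# pixel (6, 2) in an 8x8 image with 0.1 m pixel spacing
sx, sy = pixel_location(6, 2, 8, 8, 0.1, 0.1)
```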
[0024] In many image formation algorithms, and depending on how the data are collected, the image pixel location (S.sub.x, S.sub.y) is distorted from its true spatial location by the radar's wavefront curvature (see Doerry, A. W., "Wavefront curvature limitations and compensation to polar format processing for synthetic aperture radar images," SAND2007-0046, 902879, Jan. 2006, doi: 10.2172/902879, which is incorporated herein by reference). Step 7 estimates the actual spatial position (S.sub.x′, S.sub.y′) of that pixel location. Specifically, for the polar format algorithm with a close-range radar system, the true pixel location is given by a linear transform that requires calculating Δr and α. The distance from the radar data collection locations to the reference point (in this case the center of the image) is R.sub.point,
where R.sub.point(midAperture) is the closest distance the radar approaches the reference point (this calculation assumes that the image is being formed at this location and orientation):
The actual pixel location (s′.sub.x, s′.sub.y) is then calculated as:
s′.sub.y=s.sub.y−Δr
s′.sub.x=tan(α)*(R.sub.point(midAperture)+s.sub.y).
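The linear transform above can be written directly from the two equations; Δr, α, and R.sub.point(midAperture) are assumed to be available from the polar-format geometry (their defining equations are not reproduced above), and the function name is hypothetical.

```python
import math

def true_pixel_location(sy, delta_r, alpha, R_mid):
    """Wavefront-curvature correction of paragraph [0024]:
        s'_y = s_y - delta_r
        s'_x = tan(alpha) * (R_mid + s_y)
    delta_r and alpha are assumed given by the polar-format geometry."""
    sy_true = sy - delta_r
    sx_true = math.tan(alpha) * (R_mid + sy)
    return sx_true, sy_true

# with alpha = 0 the pixel lies on the y axis; only the range shifts
sx_t, sy_t = true_pixel_location(2.0, 0.5, 0.0, 10.0)
```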
[0025] The range loss corrections are computed based on the range to the actual pixel location (Step 8). Range loss is calculated using a radar range equation, based on the Friis transmission equation (https://en.wikipedia.org/wiki/Friis_transmission_equation). In the radar range equation, range appears in the denominator raised to the fourth power. The radar collects data based on the range to the scene center. When the scene size is large relative to the distance to the radar, the R.sup.4 term varies significantly over the entire scene. The assumed range used thus far in processing needs to be removed before the actual range to this pixel location is used to adjust the received power level. Fundamentally, targets that are closer in range return more power than the radar expects; this correction adjusts for that difference.
[0026] The range loss correction is made on a pulse-by-pulse basis. For each position at which the radar collected data, the distance to the reference point is calculated as Rng.sub.ref. Since the distance between the actual pixel location (s′.sub.x, s′.sub.y) and each position at which the radar collected data is calculated as Rng.sub.target, the correction factor for range loss can be calculated as
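The patent's correction-factor equation is not reproduced above. The following is a sketch under the assumption that the amplitude (voltage) correction is (Rng.sub.target/Rng.sub.ref)², consistent with the R.sup.4 power dependence; the function name is hypothetical.

```python
import numpy as np

def range_loss_factor(radar_pos, ref_point, pixel_pos):
    """Step 8 sketch.  Received power scales as 1/R^4, so the assumed
    amplitude correction is (Rng_target / Rng_ref)**2; the patent's
    exact correction-factor equation is not reproduced here."""
    rng_ref = np.linalg.norm(radar_pos - ref_point, axis=-1)
    rng_target = np.linalg.norm(radar_pos - pixel_pos, axis=-1)
    return (rng_target / rng_ref) ** 2

# a pixel at half the reference range returns 16x the expected power,
# so its amplitude is scaled by (0.5)**2
fac = range_loss_factor(np.array([0.0, 0.0]),
                        np.array([0.0, 10.0]),
                        np.array([0.0, 5.0]))
```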
[0027] Antenna beam pattern corrections are computed as a matrix based on frequency and on the angle between the target's true location and each radar measurement location, at every sample position (Step 9). Some radars use the same antenna for both transmit and receive. One method to calculate this angle is to first define a normal vector that represents the relative position to the radar measurement locations and the pointing direction of the transmit and receive antenna(s). The radar measurement positions are then used with the relative antenna location to calculate an absolute location for each transmit and receive antenna at all radar measurement locations. Another vector, from the transmit and receive antenna to the actual pixel location, is calculated. The angle between these two vectors is then calculated; a dot product is one way to compute it. The computed angle at each location is then used to interpolate the antenna gain from a stored set of antenna pattern measurement data; this antenna gain is G.sub.Tx and G.sub.Rx.
[0028] Antenna pattern measurement data will have two dimensions to express antenna gain: angle and frequency. Antenna pattern angle and frequency data can be measured independently of the radar system to characterize the antennas. The frequency points from the antenna pattern measurement data can be interpolated to match the same frequency support points and span as the phase history data.
[0029] An antenna gain factor correction, Amp.sub.fac, is calculated and applied on a per-pulse basis. Each pulse has a unique angle from the antenna to the true pixel location (calculated as described in the preceding paragraph). The collected data may or may not be calibrated to a specific RCS value at a specific location; if the data has been calibrated, the calibration must be removed before applying this antenna correction. The antenna gain at the reference point is calculated in the same way as for the true pixel location, except that the reference point is used instead of the true pixel location, yielding G.sub.refTx and G.sub.refRx.
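The dot-product angle calculation and gain lookup of paragraphs [0027]-[0029] can be sketched as follows. A one-dimensional gain cut is an assumed simplification of the two-dimensional (angle × frequency) pattern data, and the pattern values are hypothetical.

```python
import numpy as np

def beam_angle(antenna_pos, boresight, pixel_pos):
    """Angle between the antenna boresight vector and the vector from
    the antenna to the true pixel location, via a dot product."""
    v = pixel_pos - antenna_pos
    cos_ang = np.dot(v, boresight) / (np.linalg.norm(v) * np.linalg.norm(boresight))
    return np.arccos(np.clip(cos_ang, -1.0, 1.0))

def gain_at(angle, pattern_angles, pattern_gains):
    """Interpolate a measured gain cut at the computed angle.  A real
    system interpolates in both angle and frequency; this 1-D cut is a
    simplifying assumption."""
    return np.interp(angle, pattern_angles, pattern_gains)

pattern_angles = np.array([0.0, np.pi / 4, np.pi / 2])  # hypothetical pattern
pattern_gains = np.array([1.0, 0.7, 0.1])
ang = beam_angle(np.zeros(2), np.array([0.0, 1.0]), np.array([0.0, 10.0]))
g = gain_at(ang, pattern_angles, pattern_gains)
```

A pixel on boresight sees full gain; one at 90 degrees off boresight sees the pattern's edge gain.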
[0030] Phase corrections are calculated based on a suitable image formation algorithm known in the art (Step 10). For an example, see Doerry, “Wavefront Curvature Limitations and Compensation to Polar Format Processing for Synthetic Aperture Radar Images”, page 12, equations 114-124. The phase corrections are expressed as Phase.sub.fac.
[0031] Range loss corrections, antenna beam pattern corrections, and phase corrections are interpolated in the wave number domain according to the image formation method (Step 11). The phase corrections are calculated in the phase history domain based on the radar position as data is collected. The processes used in the image formation process are applied to convert the correction data to the same state as the reverse transformed image. Specifically, for use with a polar format algorithm, this process is a data resampling in the slow-time dimension. In the present implementation, correction data is created on resampled grid coordinates in the fast-time dimension (of the phase history). This is accomplished by resampling the antenna pattern data in the frequency dimension to correspond to the frequency points used in polar format to provide an interpolated phase history X.sup.//.sub.corr:
[0032] The interpolated corrections from Step 11 are applied to the phase history of the single pixel image I.sup.//.sub.uc(Step 12). This step 12 is a multiplication of each element of the phase history, X″.sub.v, with the same corresponding frequency and angle element of the interpolated phase history, X″.sub.corr, of the single pixel:
X.sub.v′″=X.sub.v″∘X.sub.corr″, where ∘ denotes element-wise (Hadamard) multiplication.
[0033] The interpolated wavenumber domain data is converted back into an image (Step 13). This step 13 is a forward transformation, reversing Step 6. It creates a single pixel image with a corrected amplitude.
I.sub.c″=F{X.sub.v′″}, where F is the forward transformation.
The pixel value is replaced in the uncorrected image with the corresponding pixel value, I.sup.//.sub.c(x,y), in the image from Step 13. Then proceeding to the next pixel in the uncorrected image, steps 6-14 are repeated until all pixels are corrected (Step 14).
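The overall per-pixel loop (Steps 6-14) can be sketched as follows. The FFT pair again stands in for the invertible image formation algorithm, and `correction_for_pixel` is a hypothetical callback standing in for Steps 7-11 (location estimation, range loss, antenna beam, and phase corrections, interpolated into X″.sub.corr).

```python
import numpy as np

def correct_image(I_uc, correction_for_pixel):
    """Per-pixel correction loop (Steps 6-14).  A 2-D FFT pair stands in
    for the invertible image formation algorithm;
    correction_for_pixel(x, y) returns the interpolated correction
    X''_corr for that pixel (a hypothetical callback)."""
    I_c = np.empty_like(I_uc)
    ny, nx = I_uc.shape
    for y in range(ny):
        for x in range(nx):
            I2_uc = np.zeros_like(I_uc)
            I2_uc[y, x] = I_uc[y, x]                  # isolate pixel (Step 6)
            X2_v = np.fft.ifft2(I2_uc)                # reverse transform
            X3_v = X2_v * correction_for_pixel(x, y)  # apply corrections (Step 12)
            I2_c = np.fft.fft2(X3_v)                  # forward transform (Step 13)
            I_c[y, x] = I2_c[y, x]                    # replace pixel (Step 14)
    return I_c

# sanity check: with an identity correction the image is unchanged
I_uc = np.arange(16, dtype=complex).reshape(4, 4)
I_c = correct_image(I_uc, lambda x, y: 1.0)
```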
[0034] In an alternate embodiment, corrections are made just to the antenna beam pattern, which is particularly useful for long range SAR systems where the range differences between near and far edges of the SAR image do not have a large variance in RCS due to relative change in distance.
[0035] A rail SAR system was computer simulated to generate SAR data from a set of objects to test the precision and accuracy of the method of the present invention.
Rail SAR Computer Simulations
[0036] A test case of five 1.5-inch diameter metal spheres was created using V-LOX electromagnetic software (IERUS Technologies, Inc., Huntsville, Ala., www.ierustech.com. V-Lox is a computational electromagnetics prediction software product based on method of moments and leverages advanced matrix compression and GPU acceleration to output high quality solutions quickly.) The sphere targets were positioned at a near corner 16, near center, center 15, far center, and far corner 17 of a 5 ft-by-5 ft area centered at a point 10 ft away from the radar (center location 15 in
[0038] The computer simulations and SAR measurements show that the radar antenna beam pattern correction method of this invention improves the accuracy of measured RCS values by bringing the measured RCS values closer to true RCS values.
[0040] The computing device 30 additionally includes a data store 34 that is accessible by the processor 31 by way of the system bus 33. The data store 34 may include executable instructions, operating parameters, etc. The computing device 30 also includes an input interface 35 that allows external devices to communicate with the computing device 30. For instance, the input interface 35 may be used to receive instructions from an external computer device, from a user, etc. The computing device 30 also includes an output interface 36 that interfaces the computing device 30 with one or more external devices. For example, the computing device 30 may display text, images, etc. by way of the output interface 36.
[0041] Additionally, while illustrated as a single system, it is to be understood that the computing device 30 may be a distributed system. Thus, for example, several devices may be in communication by way of a network connection and may collectively perform tasks described as being performed by the computing device 30.
[0042] The foregoing description illustrates and describes the disclosure. Additionally, the disclosure shows and describes only the preferred embodiments, but it is to be understood that the preferred embodiments are capable of being formed in various other combinations, modifications, and environments and are capable of changes or modifications within the scope of the invention concepts as expressed herein, commensurate with the above teachings and/or the skill or knowledge of the relevant art. The embodiments described hereinabove are further intended to explain the best modes known by applicant and to enable others skilled in the art to utilize the disclosure in such, or other, embodiments and with the various modifications required by the particular applications or uses thereof. Accordingly, the description is not intended to limit the invention to the form disclosed herein. Also, it is intended that the appended claims be construed to include alternative embodiments. It will be further understood that various changes in the details, materials, and arrangements of the parts which have been described and illustrated above in order to explain the nature of this invention may be made by those skilled in the art without departing from the principle and scope of the invention as recited in the following claims.