Modular imaging system
10338214 · 2019-07-02
Inventors
- Michael Ellenbogen (Wayland, MA)
- Michael Litchfield (Winchester, MA, US)
- Alec Rose (West Hartford, CT)
- Peter Conway (Waltham, MA, US)
CPC classification
G01V8/005
PHYSICS
G01S13/87
PHYSICS
H01Q21/293
ELECTRICITY
G01S13/887
PHYSICS
G01S13/86
PHYSICS
International classification
G01S13/88
PHYSICS
H01Q21/29
ELECTRICITY
G01S13/86
PHYSICS
Abstract
A modular imaging system includes a plurality of antenna panels, an optical sensor, and at least one data processor. The antenna panels include an array of antenna elements including at least two antenna elements separated by a spacing of more than a half wavelength. The plurality of antenna panels are configurable to be spatially arranged and oriented with respect to one another to measure radar returns of an observation domain for a target. The optical sensor has a field of view overlapping the observation domain and measures an optical image. The at least one data processor forms part of at least one computing system and is adapted to receive data characterizing the optical image and the radar returns, determine a spatial location of the target, and construct a radar return image of the target using a sparsity constraint determined from the spatial location of the target. Related apparatus, systems, techniques, and articles are also described.
Claims
1. A system comprising: a plurality of antenna panels comprising an array of antenna elements including at least two antenna elements separated by a spacing more than a half wavelength, the plurality of antenna panels are configurable to be spatially arranged and oriented with respect to one another to measure radar returns of an observation volume for a target; an optical sensor having a field of view overlapping the observation volume and for measuring an optical image; and at least one data processor forming part of at least one computing system and adapted to receive data characterizing the optical image and the radar returns, determine a spatial location of the target using the data characterizing the optical image, determine a sparsity constraint using the determined spatial location of the target, and construct a radar return image of the target using the determined sparsity constraint and the radar returns, wherein the spatial location of the target defines empty voxels and voxels in which the target is present.
2. The system of claim 1, further comprising: a base station to at least receive the radar returns from the plurality of antenna panels and generate the data characterizing the radar returns as in-phase and quadrature data; a display; wherein the at least one data processor forming part of at least one computing system is further adapted to detect for a presence of threat objects in the radar return image; and wherein the at least one data processor forming part of the at least one computing system is further adapted to render, in the display, characterizations of the detected threat objects.
3. The system of claim 1, wherein the plurality of antenna panels are configurable to be spatially arranged and oriented with respect to one another based on an intended application.
4. The system of claim 1, the plurality of antenna panels including: four approach panels arranged vertically with respect to one another and coplanar in a first plane; four rearward panels arranged vertically with respect to one another and coplanar in a second plane; wherein an angle between the first plane and the second plane is between 10 degrees and 170 degrees.
5. The system of claim 4, wherein the angle between the first plane and the second plane is between 60 degrees and 120 degrees.
6. The system of claim 4, wherein the four approach panels span a vertical distance less than 160 centimeters and each panel's vertical dimension is between 20 and 30 centimeters.
7. The system of claim 5, the plurality of antenna panels further including: a second set of four approach panels; and a second set of four rearward panels, with a pass-through region between the four approach panels and the second set of approach panels.
8. The system of claim 1, further comprising: a housing having a first hinge to fold the housing, the housing coupled to the plurality of antenna panels, the plurality of antenna panels including four panels coplanar in a first plane.
9. The system of claim 8, wherein the plurality of antennas further include a fifth panel coupled to the housing with a second hinge.
10. The system of claim 8, wherein the system is collapsible by folding the housing using the first hinge so that each panel in the plurality of antenna panels is enclosed by the housing.
11. The system of claim 8, wherein, when the housing is in a closed position, a largest dimension of the housing is less than 50 centimeters, and a second dimension of the housing is between 27 centimeters and 40 centimeters.
12. The system of claim 1, the plurality of antenna panels including at least nine coplanar panels in a row and column arrangement, each panel separated from a neighboring panel by between 4 and 8 centimeters.
13. The system of claim 1, the plurality of antenna panels including: a first set of two approach panels arranged vertically with respect to one another and coplanar in a first plane; a second set of two approach panels arranged vertically with respect to one another and coplanar in a second plane; wherein an angle between the first plane and the second plane is between 100 and 170 degrees.
14. The system of claim 13, wherein the two approach panels in the first set are separated vertically by between 30 and 60 centimeters.
15. A method comprising: receiving, by at least one data processor, data characterizing radar returns measured by a plurality of antenna panels comprising an array of antenna elements including at least two antenna elements separated by a spacing more than a half wavelength, the plurality of antenna panels are configurable to be spatially arranged and oriented with respect to one another to measure radar returns of an observation volume for a target; receiving, by at least one data processor, data characterizing an optical image containing the observation volume and measured by an optical sensor having a field of view overlapping with the observation volume; determining a spatial location of the target using the data characterizing the optical image; determining a sparsity constraint using the determined spatial location of the target; and constructing a radar return image of the target using the determined sparsity constraint and the radar returns, wherein the spatial location of the target defines empty voxels and voxels in which the target is present.
16. The method of claim 15, further comprising: generating, by a base station receiving the radar returns from the plurality of antenna panels, the data characterizing the radar returns as in-phase and quadrature data; automatically detecting for a presence of threat objects in the radar return image; and rendering, in a display, characterizations of the detected threat objects.
17. The method of claim 15, wherein the plurality of antenna panels are configurable to be spatially arranged vertically adjacent to one another to inspect targets moving through the observation volume.
18. A non-transitory computer program product storing instructions which, when executed by at least one data processor forming part of at least one computer, result in operations comprising: receiving, by at least one data processor, data characterizing radar returns measured by a plurality of antenna panels comprising an array of antenna elements including at least two antenna elements separated by a spacing more than a half wavelength, the plurality of antenna panels are configurable to be spatially arranged and oriented with respect to one another to measure radar returns of an observation volume for a target; receiving, by at least one data processor, data characterizing an optical image containing the observation volume and measured by an optical sensor having a field of view overlapping with the observation volume; determining a spatial location of the target using the data characterizing the optical image; determining a sparsity constraint using the determined spatial location of the target; and constructing a radar return image of the target using the determined sparsity constraint and the radar returns, wherein the spatial location of the target defines empty voxels and voxels in which the target is present.
19. The non-transitory computer program product of claim 18, the operations further comprising: generating, by a base station receiving the radar returns from the plurality of antenna panels, the data characterizing the radar returns as in-phase and quadrature data; automatically detecting for a presence of threat objects in the radar return image; and rendering, in a display, characterizations of the detected threat objects.
20. The system of claim 1, wherein constructing the radar return image includes solving an underdetermined linear system using compressed sensing.
21. The system of claim 1, the at least one data processor further adapted to determine three dimensional surfaces within the observation volume.
Description
DESCRIPTION OF DRAWINGS
(17) Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION
(18) The current subject matter can include an RF imaging system including multiple antenna panels that can be modularly assembled, scaled, and independently arranged based on an intended application. Moreover, the RF imaging system can include an optical sensor to provide for compressed sensing to reduce the amount of data acquired and processed, thereby reducing the RF imaging system's size and cost requirements.
(19) The antenna panels can be connected to a base station for data processing, which can be connected to a computing device for automatic image generation, threat detection, and viewing of images. The system can detect threats, such as knives, guns, explosives, and the like, carried on an individual. Each antenna panel can act as a building block of a whole system such that controlling the number of antenna panels and their orientation with respect to an observational domain allows the system to be configured for different applications.
(20) In addition, each antenna panel can include a sparse array of antenna elements enabling images to be acquired using compressive sensing thereby reducing the amount of data acquired, which, in turn, reduces the amount of data that must be processed. A sensor, such as a video camera, can be included to determine a target's spatial location for enforcing compressed sensing sparsity constraints.
(22) Each antenna panel 105.sub.i includes antenna elements that are sparsely distributed across the face of the antenna panel 105.sub.i to enable compressed sensing of observation domain 107. Antenna elements can be considered sparsely distributed if a spacing of more than a half-wavelength (of an operating frequency) separates the elements.
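The half-wavelength threshold above can be sketched numerically. The 24 GHz operating frequency below is an illustrative assumption, not a value stated in this document:

```python
# Sketch: half-wavelength spacing threshold for a "sparse" element layout.
# The 24 GHz operating frequency is an illustrative assumption.
C = 299_792_458.0  # speed of light, m/s


def half_wavelength(freq_hz: float) -> float:
    """Return lambda/2 in meters for the given operating frequency."""
    return C / freq_hz / 2.0


def is_sparse(spacing_m: float, freq_hz: float) -> bool:
    """Elements are 'sparsely distributed' if spaced more than lambda/2 apart."""
    return spacing_m > half_wavelength(freq_hz)


print(round(half_wavelength(24e9) * 1000, 2))  # 6.25 mm at 24 GHz
print(is_sparse(0.010, 24e9))                  # True: 10 mm exceeds lambda/2
```

Under this assumed frequency, any element spacing above roughly 6.25 mm would count as sparse.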
(24) In addition, antenna panels 105.sub.i can be arranged in various configurations and orientations with respect to one another to illuminate an observational domain (OD) 107. Moreover, the system is capable of having an expandable number of antenna panels 105.sub.i. The number, configuration, and orientations of the antenna panels 105.sub.i can define customizable ODs based on an intended application. In other words, the modular imaging system 100 can support multiple concepts of operation.
(25) Additional concepts of operation are possible.
(28) In some implementations, the angle of the first plane, as measured to a transverse axis 815, can vary between 10 and 85 degrees. In an example implementation, the angle is 50 degrees. An angle of the second plane can similarly be between 10 and 85 degrees. Thus, an angle between the first plane and the second plane can be between 10 degrees and 170 degrees, or more narrowly between 60 and 120 degrees; in the example implementation, the angle between the first plane and the second plane is 110 degrees. By controlling the angle of the antenna panels, the observation domain can be controlled.
(29) Inter-panel spacing 805 in the vertical direction can vary and in an example implementation is 16.8 cm. Thus, with the example panels described above (in which each panel's vertical dimension is between 20 and 30 centimeters) and in the security checkpoint panel configuration, the four approach panels span a vertical distance less than 160 centimeters. Inter-panel offset 810 can be 2.9 cm, which can allow for improved resolution of the observational domain.
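The span figure above can be checked arithmetically. The sketch below assumes each panel is 25 cm tall (within the stated 20 to 30 centimeter range) and treats the 16.8 cm inter-panel spacing as the gap between adjacent panels rather than a center-to-center distance; both are assumptions for illustration:

```python
# Sketch: vertical span of the four approach panels. Assumptions: 25 cm panel
# height and 16.8 cm gaps between adjacent panels (not center-to-center).
panel_height_cm = 25.0
gap_cm = 16.8
n_panels = 4

span_cm = n_panels * panel_height_cm + (n_panels - 1) * gap_cm
print(round(span_cm, 1))  # 150.4 cm, consistent with "less than 160 centimeters"
```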
(33) Inter-panel spacing 1120 and 1125 can vary and in an example implementation can be 1.2 cm and 8.9 cm, respectively. Thus, with the example panels described above (in which each panel's vertical dimension is between 20 and 30 centimeters) and in the field access portable arrangement 710, the four panels span a first distance (e.g., height or width) that is less than 50 centimeters and a second distance (e.g., width or height, respectively) of the housing 1105 that is less than 40 centimeters.
(34) The housing 1105 can fold into a brief-case-like shape for portability. For example, in an example implementation, when the housing 1105 is closed, a largest dimension (e.g., length) of the housing can be less than 50 centimeters, and a second dimension (e.g., height) of the housing is between 27 centimeters and 40 centimeters.
(39) In some implementations, the angle of the first plane, as measured to a transverse axis 1405, can vary between 10 and 85 degrees. In an example implementation, the angle is 22 degrees. An angle of the second plane can similarly be between 10 and 85 degrees. Thus, an angle between the first plane and the second plane can be between 100 degrees and 170 degrees, and in the example implementation, the angle between the first plane and the second plane is 134 degrees. By controlling the angle of the antenna panels, the observation domain can be controlled.
(40) Inter-panel spacing 1410 in the vertical direction can vary and in an example implementation is 45.8 cm. Thus, with the example panels described above (in which each panel's vertical dimension is between 20 and 30 centimeters) and in the access and chokepoint panel configuration, the two approach panels in a given set are separated vertically by between 30 and 60 centimeters and span a vertical distance less than 120 centimeters.
(42) Modular imaging system 100 can include RF base 160 capable of generating an RF local oscillator reference signal that can be distributed to antenna panels 105.sub.i. In some implementations, the reference signal can establish a fully phase coherent imaging system across all receive-transmit antenna pairs and across all antenna panels 105.sub.i.
(43) Modular imaging system 100 can include sensor 125 such as an infrared (IR) camera, thermal camera, ultrasonic distance sensor, video camera, electro-optical (EO) camera, or surface/depth map camera. Sensor 125 creates an additional information image or video, such as an optical image, of at least the OD 107. In some implementations, sensor 125 transmits images or video via a USB connection to processing system 120 for further analysis. The modular imaging system 100 can include multiple sensors 125. Sensor 125 can also be used to detect the presence of a target in the OD 107, which can be used to trigger RF scanning by the imaging system 100.
(44) Processing system 120 includes a number of modules for processing radar return data and additional information images from sensor 125 of the OD 107 including data acquisition process 130, calibration process 135, reconstruction process 140, automatic threat recognition process 145 and renderer 150.
(45) Data acquisition process 130 acquires raw data from the DAS base station 115 and additional information images from the sensor 125. For each sensor (e.g., antenna panel 105.sub.i and sensor 125), data acquisition process 130 acquires and normalizes the sensor data. Timing of the sensor data is synchronized across sensors and data acquisition process 130 publishes the acquired data as frames (e.g., time slices) for further analysis by modular imaging system 100. Thus, for a given frame, data acquisition process 130 publishes a set of data for each antenna panel 105.sub.i and sensor 125. In some implementations, data is acquired and frames are published at near video frame rates (e.g., approximately 24 frames per second).
(46) Calibration process 135 applies calibration to the published data.
(47) Reconstruction process 140 transforms the calibrated radar return data into images and/or feature maps using compressed sensing constraints. An image can be created for each antenna panel 105.sub.i, and/or based on a composite of measurements obtained by multiple antenna panels 105.sub.i. Because measurements of the OD 107 are sparsely acquired via antenna panels 105.sub.i, reconstructing an image of the OD 107 can be considered as finding solutions to an underdetermined linear system. Compressed sensing is a signal processing technique for efficiently acquiring and reconstructing a signal (e.g., an image of the target residing in OD 107) by finding solutions to underdetermined linear systems. The solution may be found using, e.g., matched-filter, least-squares, and similar solution algorithms. Compressed sensing is based on the principle that, through optimization, the inherent sparsity of the information and a-priori knowledge of the items or subjects that may occupy the OD can be exploited to recover the images of interest from far fewer samples than required by the Shannon-Nyquist sampling theorem.
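The underdetermined-system formulation can be sketched as follows. The iterative soft-thresholding (ISTA) solver, matrix sizes, and random scene below are illustrative stand-ins, not the solver used by reconstruction process 140:

```python
import numpy as np

# Sketch: compressed-sensing recovery of a sparse scene from an
# underdetermined linear system y = A x (fewer measurements M than voxels N).
rng = np.random.default_rng(0)
M, N = 60, 200                      # M measurements, N voxels (M < N)
A = rng.standard_normal((M, N)) / np.sqrt(M)

x_true = np.zeros(N)                # sparse scene: only a few occupied voxels
x_true[[10, 50, 120]] = [1.0, -0.8, 0.5]
y = A @ x_true                      # simulated radar measurements


def ista(A, y, lam=0.02, iters=800):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x


x_hat = ista(A, y)
# The largest recovered entries should sit at the true voxel indices.
print(sorted(np.argsort(np.abs(x_hat))[-3:]))
```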
(48) Image data from the sensor 125 can be used to further enforce the sparsity constraint beyond that supplied by a-priori knowledge of items or subjects that may occupy the OD. Specifically, an image of the OD 107 acquired by sensor 125 can be used to determine a spatial location of the target (e.g., which voxels of the OD 107 the target resides in and which voxels of the OD 107 are empty). Empty voxels contain no scatterers and therefore can be considered zero for compressed sensing reconstruction (e.g., enabling better and/or quicker estimations of the solution to the underdetermined linear system).
(49) In addition, an appropriate sized OD 107 can result in a scene that is sufficiently sparse for compressed sensing reconstruction. For example, if an OD 107 is a volume that is 2 meters by 1 meter by 0.5 meters, and is divided into 8,000,000 voxels of 5 mm, a typical human located within this OD 107 would occupy only about 10% of the voxels at any moment (e.g., approximately 800,000 voxels). An image from a sensor 125 can be used to determine three-dimensional surfaces within the OD 107 volume and consequently which voxels the individual resides in. The empty voxels can be forced to zeros when reconstructing the radar return image while non-zeroed voxels can be altered during reconstruction (e.g., can be considered variables to find an optimal reconstructed solution to the underdetermined linear system).
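Forcing empty voxels to zero can be sketched on a toy system. The grid arithmetic below matches the example OD above, while the small linear system and occupied-voxel indices are illustrative:

```python
import numpy as np

# Sketch: the optical sensor's surface map fixes empty voxels to zero, so the
# reconstruction solves only for occupied voxels. Grid arithmetic matches the
# example OD; the tiny linear system is an illustrative toy.
nx, ny, nz = round(2.0 / 0.005), round(1.0 / 0.005), round(0.5 / 0.005)
assert nx * ny * nz == 8_000_000  # 2 m x 1 m x 0.5 m at 5 mm voxels

# Toy system: 6 radar measurements of 20 voxels, only 3 of them occupied.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 20))
x_true = np.zeros(20)
occupied = np.array([2, 7, 15])   # indices the depth map marks as occupied
x_true[occupied] = [1.0, 0.5, -0.7]
y = A @ x_true                    # simulated radar returns

# Pin empty voxels to zero and solve the restricted, now well-posed system.
x_hat = np.zeros(20)
x_hat[occupied], *_ = np.linalg.lstsq(A[:, occupied], y, rcond=None)
print(np.allclose(x_hat, x_true))  # True: exact recovery in the noiseless toy
```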
(50) Reconstruction process 140 can reconstruct one or more images. For example, each panel can serve as a transmit/receive pair and can be treated independently. For N panels, there are N.sup.2 independent effective apertures, each with a unique center-of-mass. Reconstruction process 140 can reconstruct an image from each of these effective apertures. In addition, reconstruction process 140 can create aggregate images by combining multiple independent images. Reconstruction process 140 can also treat all panels as one large sparse aperture and reconstruct a single image using the information acquired from all panels in the single aperture.
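The N.sup.2 aperture count can be illustrated by enumerating ordered transmit/receive panel pairs. The panel names below are placeholders:

```python
from itertools import product

# Sketch: each ordered (transmit, receive) panel pair acts as an independent
# effective aperture, so N panels yield N**2 apertures.
panels = ["P1", "P2", "P3"]
apertures = list(product(panels, panels))  # ordered transmit/receive pairs
print(len(apertures))  # 9 == 3**2
```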
(51) Reconstruction process 140 can generate feature maps from the reconstructed images. Feature maps can include scatterer return data or other characterizations or features of the radar return measurements. Statistical analysis can be performed across multiple images. Some example features include local surface normal, surface-width, surface smoothness/pointiness, summed magnitude, and the like. Other features are possible.
(52) Automatic threat recognition process 145 analyzes radar return images and/or feature maps for presence of threat objects. Threat objects can include dangerous items that an individual may conceal on their person, for example, guns, knives, and explosives. Automatic threat recognition process 145 may identify threats using, for example, a classifier that assesses the feature maps generated by reconstruction process 140. The classifier may train on known threat images.
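The classification step can be sketched with a minimal stand-in. The nearest-centroid rule and synthetic feature vectors below are illustrative only and do not reflect the classifier or training data of automatic threat recognition process 145:

```python
import numpy as np

# Sketch: a stand-in nearest-centroid classifier over feature-map vectors.
# The real process trains a classifier on known threat images; the feature
# vectors here are synthetic placeholders used only to show the decision step.
rng = np.random.default_rng(2)
threat_feats = rng.normal(3.0, 0.5, size=(50, 4))  # synthetic threat examples
benign_feats = rng.normal(0.0, 0.5, size=(50, 4))  # synthetic benign examples
centroids = {"threat": threat_feats.mean(axis=0),
             "benign": benign_feats.mean(axis=0)}


def classify(feat_vec):
    """Label a feature vector by its nearest class centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(feat_vec - centroids[k]))


print(classify(np.array([2.8, 3.1, 2.9, 3.2])))   # threat
print(classify(np.array([0.1, -0.2, 0.0, 0.3])))  # benign
```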
(53) Renderer 150 generates or renders an image characterizing the outcome of the threat recognition 145 analysis. The image is rendered on display 155. For example, renderer 150 can illustrate an avatar of a scanned person and any identified threats. Renderer 150 can illustrate a characterization that automatic threat recognition 145 did not detect any threats.
(55) At 515, DAS base station 115 aggregates the digitized data from each of the DAS boards 110.sub.i.
(56) At 520, data acquisition process 130 receives the aggregated data from DAS base station 115 and receives information images from sensor 125. Data acquisition process 130 synchronizes timing and publishes the acquired data in frames.
(57) At 525, calibration process 135 applies calibration to the published data on a sensor by sensor basis.
(58) At 530, reconstruction process 140 applies compressed sensing solving algorithms to the calibrated data to reconstruct images and/or feature maps. Reconstruction process 140 can reconstruct images for each transmit/receive pair of panels. In addition, reconstruction process 140 can determine a spatial location of a target in the OD from the image derived from sensor 125 and, based on the spatial location of the target, determine a sparsity constraint for use in the compressed sensing solving algorithms. Other sensors may yield other types of constraints/priors to aid the compressed sensing solving algorithms in similar ways. In some implementations, the image derived from sensor 125 is a surface map image or depth map image (e.g., generated by a surface map camera) that contains information relating to the distance of the surfaces of scene objects from the sensor 125 viewpoint or another viewpoint.
(59) At 535, automatic threat recognition process 145 analyzes the images and/or feature maps, for example, using a classifier, for the presence of threat objects. An indication or characterization of the presence of a threat object is provided to renderer 150, which, at 540, displays in display 155 the indication of the presence of the threat object on an avatar of the target in the OD 107.
(61) At 610, data characterizing radar returns is received. The radar returns have been measured by a plurality of antenna panels 105.sub.i comprising an array of antenna elements including at least two antenna elements that are sparsely distributed on the antenna panel 105.sub.i (e.g., separated by a spacing of more than a half wavelength of the operating frequencies). The plurality of antenna panels 105.sub.i are configurable to be arranged to measure a target in an observation domain, for example, as described above.
(62) At 620, data characterizing an image containing the OD is received. The image can be measured by a sensor 125 having a field of view overlapping with the OD. The sensor 125 can include an infrared sensor, an electro-optical sensor, a surface map camera, and the like.
(63) At 630, data characterizing the radar returns can be generated as complex, phase coherent (e.g., in-phase and quadrature) data. The in-phase and quadrature data can be generated from the analog reception of the RF signal at antenna panel 105.sub.i by comparing the analog reception against a reference RF signal, the comparative result being relayed to DAS board 110.sub.i, where it is digitized.
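The in-phase and quadrature extraction can be sketched as mixing the received signal against the coherent reference and averaging away the double-frequency term. The carrier frequency, sample rate, and echo parameters below are illustrative assumptions:

```python
import numpy as np

# Sketch: recovering in-phase (I) and quadrature (Q) components by mixing the
# received signal with the coherent reference and averaging. Carrier, sample
# rate, and echo amplitude/phase are illustrative assumptions.
fs = 1e6                      # sample rate, Hz
fc = 1e5                      # reference (carrier) frequency, Hz
t = np.arange(2000) / fs      # 2000 samples = an integer number of periods
amp, phase = 0.8, 0.6         # unknown echo amplitude and phase to recover
rx = amp * np.cos(2 * np.pi * fc * t + phase)

# Mixing with reference cos/sin and averaging rejects the 2*fc component
# exactly over an integer number of periods, leaving a*cos(phi) and a*sin(phi).
i_data = np.mean(rx * 2 * np.cos(2 * np.pi * fc * t))
q_data = np.mean(rx * -2 * np.sin(2 * np.pi * fc * t))

print(round(float(np.hypot(i_data, q_data)), 3),
      round(float(np.arctan2(q_data, i_data)), 3))  # 0.8 0.6
```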
(64) At 640, a spatial location of the target can be determined using the data characterizing the sensor image (other types of information from other sensors may also be generated). The spatial location (or other information) of the target can define empty voxels and occupied voxels (e.g., voxels in which the target is present), or other priors, in the OD.
(65) At 650, a radar return image of the target can be constructed using a sparsity constraint determined from the spatial location of the target (or other priors from other sensors). The sparsity constraint can include considering empty voxels to have zero values for compressed sensing reconstruction algorithms. Feature maps may be generated, and the presence of threat objects in the OD may be detected using the images and/or feature maps. In some implementations, a characterization of the detected threat objects may be displayed, for example, with an avatar of a person indicating the location of the threat object on the person.
(66) Although a few variations have been described in detail above, other modifications or additions are possible. For example, the number of antenna panels is not limited, and some implementations may include any number of antenna panels, which may be configurable and/or reconfigurable based on the intended application. The antenna panels are not limited to a particular frequency; for example, antenna panels with different properties (operating frequencies, element locations, and the like) can be used. In some implementations, an already implemented system can have its antenna panels swapped or exchanged for antenna panels and DAS panels with differing properties (operating frequencies, element locations, and the like). Different compressed sensing reconstruction algorithms may be used, and different features may be used for threat detection. The OD may be a single continuous region or multiple separate regions. Other implementations are possible.
(67) Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example implementations disclosed herein may include one or more of the following. Modular antenna panels can serve as building blocks to allow for optimal location. The location can be based on intended applications to better illuminate an individual being screened and eliminate blind spots, which can improve probability of detection while reducing false alarm rates. The modular antenna panels can be solid state and the system can have no moving parts, which increases image acquisition frame rates (enabling walk-thru/walk-by and overt or covert operation versus conventionally deployed mechanically scanning devices), extends operational life, and reduces maintenance costs. In some configurations, the modular antenna panels can screen individuals walking in near proximity, thereby eliminating the need for screened individuals to remain stationary during the imaging process. Compressed sensing can reduce the amount of data that is measured, which can reduce the amount of data processed, which can reduce system cost, required processing time, size (e.g., footprint), and the like. In addition, any number of modular antenna panels can be used, allowing the system to scale based on intended application. Additional antenna panels can improve resolution, while fewer antenna panels can lower cost.
(68) One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
(69) These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. As used herein, the term machine-readable medium refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores.
(70) To provide for interaction with a user, one or more aspects or features of the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) or a light emitting diode (LED) monitor for displaying information to the user and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including, but not limited to, acoustic, speech, or tactile input. Other possible input devices include, but are not limited to, touch screens or other touch-sensitive devices such as single or multi-point resistive or capacitive trackpads, voice recognition hardware and software, optical scanners, optical pointers, digital image capture devices and associated interpretation software, and the like.
(71) In the descriptions above and in the claims, phrases such as "at least one of" or "one or more of" may occur followed by a conjunctive list of elements or features. The term "and/or" may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases "at least one of A and B," "one or more of A and B," and "A and/or B" are each intended to mean A alone, B alone, or A and B together. A similar interpretation is also intended for lists including three or more items. For example, the phrases "at least one of A, B, and C," "one or more of A, B, and C," and "A, B, and/or C" are each intended to mean A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together. In addition, use of the term "based on," above and in the claims, is intended to mean "based at least in part on," such that an unrecited feature or element is also permissible.
(72) The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.