USING MACHINE-TRAINED NETWORK TO PERFORM DRC CHECK
20230267265 · 2023-08-24
Inventors
CPC classification
G06F30/398
PHYSICS
G06N5/01
PHYSICS
International classification
G06F30/398
PHYSICS
Abstract
A method for performing pixel-based design rule checking (DRC) is described. This method is used to perform design rule checks for rectilinear and curvilinear designs. In some embodiments, the pixel-based approach is based on computational deep-learning. The pixel-based DRC method of some embodiments is more resilient to false positives than traditional geometric approaches, particularly for designs with curvilinear content, and the inference time remains constant, regardless of how many shapes exist in the design being checked, or how many polygon edges are needed to represent its curvature. The DRC method of some embodiments is implemented by highly parallel architectures (such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs)) to improve processing throughput compared to traditional means.
Claims
1. A method for performing design rule checking (DRC) on a design comprising a plurality of shapes, the method comprising: receiving a first description of the design in a first non-pixelized format; producing, from the first description, a second description of the design in a second pixelized format; using the second description to provide input to a machine-trained network to process in order to identify a DRC violation in the design; and based on output produced by the machine-trained network, identifying DRC violations in the design.
2. The method of claim 1 wherein the DRC violations are initially expressed in a pixel-based format, the method further comprising generating, for the identified DRC violations that are specified in the pixel-based format, contoured shapes to display with the design, and displaying the design with the contoured shapes in order to identify locations in the design that have DRC violations.
3. The method of claim 2, wherein the contoured shapes are displayed along with the design by a geometry-based design editing or visualization tool.
4. The method of claim 1, wherein the machine-trained network is a neural network.
5. The method of claim 1, wherein the shapes comprise rectilinear shapes and curvilinear shapes.
6. The method of claim 5, wherein each rectilinear shape is formed by Manhattan edges, each curvilinear shape is formed by at least one curvilinear edge, and the shapes further comprise shapes with at least one non-Manhattan rectilinear edge that has a 45-degree angle or another angle other than 0, 45, or 90.
7. The method of claim 1, wherein said producing comprises using the first description to rasterize the design to obtain the second pixelized format in which pixel values are used to describe the design.
8. The method of claim 7, wherein the first description comprises a description of shapes as polygons.
9. The method of claim 1, wherein the machine-trained network implements a DRC checking process.
10. The method of claim 1, wherein the machine-trained network partially implements a DRC checking process along with a pixel-based process.
11. The method of claim 10, wherein the pixel-based process comprises a morphological image-processing process.
12. A non-transitory machine-readable medium storing a program, which when executed by at least one processing unit of a computer, performs design rule checking (DRC) on a design comprising a plurality of shapes, the program comprising sets of instructions for: receiving a first description of the design in a first non-pixelized format; producing, from the first description, a second description of the design in a second pixelized format; using the second description to provide input to a machine-trained network to process in order to identify a DRC violation in the design; and based on output produced by the machine-trained network, identifying DRC violations in the design.
13. The non-transitory machine-readable medium of claim 12 wherein the DRC violations are initially expressed in a pixel-based format, the program further comprising a set of instructions for generating, for the identified DRC violations that are specified in the pixel-based format, contoured shapes to display with the design, and displaying the design with the contoured shapes in order to identify locations in the design that have DRC violations.
14. The non-transitory machine-readable medium of claim 13, wherein the contoured shapes are displayed along with the design by a geometry-based design editing or visualization tool.
15. The non-transitory machine-readable medium of claim 12, wherein the machine-trained network is a neural network.
16. The non-transitory machine-readable medium of claim 12, wherein the shapes comprise rectilinear shapes and curvilinear shapes.
17. The non-transitory machine-readable medium of claim 16, wherein each rectilinear shape is formed by Manhattan edges, each curvilinear shape is formed by at least one curvilinear edge, and the shapes further comprise shapes with at least one non-Manhattan rectilinear edge that has a 45-degree angle or another angle other than 0, 45 or 90.
18. The non-transitory machine-readable medium of claim 12, wherein the machine-trained network produces a single output representative of design violations for a single design rule.
19. The non-transitory machine-readable medium of claim 12, wherein the machine-trained network produces multiple outputs representative of design violations for multiple design rules.
20. The non-transitory machine-readable medium of claim 19, wherein the multiple design rules comprise at least two of DRC rule constraints regarding widths of shapes, DRC rule constraints regarding spacing between two shapes, and DRC rule constraints regarding an amount by which one shape encloses another.
Description
BRIEF DESCRIPTION OF FIGURES
[0023] The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
DETAILED DESCRIPTION
[0056] In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
[0057] Some embodiments of the invention provide a method for performing pixel-based design rule check (DRC). The method of some embodiments is used to perform design rule checks for rectilinear and curvilinear designs. The method of some embodiments uses a machine-trained network (e.g., a trained convolutional neural network) to perform the pixel-based processing. In some embodiments, the machine-trained network is trained through a deep learning process that uses data from one or more different DRC methods (such as traditional (geometric), equation-based or circle-tracing methods) to produce the data used for the training.
[0058] Once trained, the method of some embodiments uses the machine-trained network (e.g., the neural network) to infer DRC errors for rasterized images of designs containing rectilinear and curvilinear content that it has not seen before. The rasterized DRC errors are then converted back to the geometry domain for display in a design editing or viewing tool, for example by overlaying them upon the original design. Some embodiments use a single machine-trained network (e.g., the neural network) that is trained to handle multiple types of DRC at once, while other embodiments use multiple machine-trained networks (e.g., multiple neural networks) to run in parallel, each running as few as one, or perhaps multiple DRC checks.
[0059] The pixel-based DRC that is performed by the machine-trained networks of some embodiments is more resilient to false positives than the geometric approach, particularly for designs with curvilinear content, and the inference time remains constant, regardless of how many shapes exist in the design being checked, or how many polygon edges are needed to represent its curvature. The inference time (output time) is further enhanced in some embodiments by using highly parallel architectures (such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs)) for the processing of the machine-trained network.
[0061] Rasterization is the task of taking an image in which shapes or their contours are defined in one format (e.g., in a vector graphics format) and converting the image into a raster image in which each shape or its contours is/are defined by reference to a series of pixels, dots or lines, which, when displayed together, create the image that was originally represented by the shapes. In some embodiments, the rasterized images are defined in terms of pixels that are displayed on a computer display, video display or printer, or stored in a bitmap file format. As such, rasterization in some embodiments refers to the technique of drawing 3D models, or the conversion of 2D rendering primitives (such as polygons, line segments, etc.) into a rasterized format (e.g., into a pixel-based definition of those models or primitives).
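The pixel-center rasterization described above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name, the rectangle representation, and the 8 nm pixel size used in the example are assumptions for illustration, not part of any claimed embodiment:

```python
def rasterize_rect(rect, grid_w, grid_h, pixel_size):
    """Rasterize an axis-aligned rectangle (x0, y0, x1, y1), given in
    design units, into a grid_h x grid_w grid of 0/1 pixel values.
    A pixel is set when its center falls inside the rectangle."""
    x0, y0, x1, y1 = rect
    grid = [[0] * grid_w for _ in range(grid_h)]
    for row in range(grid_h):
        for col in range(grid_w):
            cx = (col + 0.5) * pixel_size  # pixel-center coordinates
            cy = (row + 0.5) * pixel_size
            if x0 <= cx < x1 and y0 <= cy < y1:
                grid[row][col] = 1
    return grid

# An 8 nm pixel grid covering a 64 nm x 64 nm window:
grid = rasterize_rect((8, 8, 40, 24), grid_w=8, grid_h=8, pixel_size=8)
```

Production rasterizers also handle non-rectangular polygons and anti-aliased (grey-scale) coverage, but the shape-to-pixels conversion follows the same principle.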
[0062] In some embodiments, DRC error markers are created where DRC errors exist (as determined by traditional methods) and are rasterized to images of a given pixel size. The input and output raster images are then used to train the neural network. Once trained, the neural network is used to infer DRC errors for rasterized images of designs containing rectilinear and curvilinear content that it has not seen before. The rasterized DRC errors are then converted back to the geometry domain via a contouring operation. This step allows the visualization or display of the DRC error markers in a geometry-based design editing or viewing tool, for example by overlaying them upon the original design. In some embodiments, the ‘marching squares’ process (e.g., marching-square algorithm) is used during contouring to achieve this transformation.
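The cell-classification step at the heart of the marching-squares contouring mentioned above can be sketched as follows. This is a minimal illustration of only the first stage of the algorithm (assigning each 2x2 pixel cell one of 16 cases); the lookup of edge segments per case is omitted, and the bit ordering chosen here is one common convention, not one taken from the document:

```python
def marching_squares_cases(img, level=0.5):
    """Classify each 2x2 cell of a scalar image into one of the 16
    marching-squares cases.  Bit order: top-left = 8, top-right = 4,
    bottom-right = 2, bottom-left = 1.  Cases 0 and 15 contain no
    contour; every other case crosses the iso-level."""
    h, w = len(img), len(img[0])
    cases = [[0] * (w - 1) for _ in range(h - 1)]
    for r in range(h - 1):
        for c in range(w - 1):
            idx = 0
            if img[r][c] > level:         idx |= 8  # top-left corner
            if img[r][c + 1] > level:     idx |= 4  # top-right corner
            if img[r + 1][c + 1] > level: idx |= 2  # bottom-right corner
            if img[r + 1][c] > level:     idx |= 1  # bottom-left corner
            cases[r][c] = idx
    return cases

# A single hot pixel yields four contour-bearing cells around it:
img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
cases = marching_squares_cases(img)
```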
[0063] In some embodiments, the overall process involves a rasterization step to move from the geometry domain to the pixel domain. This rasterization step does have some associated cost. Hence, it is beneficial to operate as much in the pixel domain as possible thereafter. This allows the cost of rasterization to be amortized over other operations performed within the pixel domain, and the entire flow benefits significantly from pixel-friendly hardware architectures such as GPUs and TPUs. Some embodiments provide a method for performing DRC operations in the pixel space using deep learning. Also, some embodiments augment the deep learning approach with other pixel-based approaches, creating a hybrid method. For example, the DRC rule checks in some embodiments are fully or partially implemented using a deep learning approach, while others are fully or partially implemented by other pixel-based approaches (such as by using standard image-processing programs) which are not deep-learning based.
[0064] In some embodiments, deep learning-based approaches are augmented by other pixel-based methods, such as filtering or morphological image-processing methods. High-pass filtering is used to enhance rapidly changing areas of the image, most often associated with edges (such as the edges of the post-rasterization polygons). Morphological image processing includes dilation and erosion, where the dilation operation adds pixels to the boundaries of objects in an image and the erosion operation removes pixels from object boundaries. Morphological image-processing operations in some embodiments are used to dilate objects within the image until they touch; at that point, if the number of dilation steps needed is below a certain minimum, the objects within the image are deemed as having insufficient spacing.
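The dilation-based spacing check can be sketched in pure Python. The 3x3 structuring element, the grid sizes, and the comparison of the step count against a minimum are illustrative assumptions; a real implementation would use an optimized image-processing library rather than nested loops:

```python
def dilate(img):
    """One step of binary dilation with a 3x3 structuring element:
    every pixel that is set, or 8-connected to a set pixel, becomes
    set in the output."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and img[rr][cc]:
                        out[r][c] = 1
    return out

def dilation_steps_until_touch(obj_a, obj_b, max_steps=16):
    """Repeatedly dilate obj_a and report how many steps it takes to
    overlap obj_b; fewer steps means the two objects are closer."""
    img = [row[:] for row in obj_a]
    for step in range(1, max_steps + 1):
        img = dilate(img)
        if any(img[r][c] and obj_b[r][c]
               for r in range(len(img)) for c in range(len(img[0]))):
            return step
    return None

# Two single-pixel objects three empty pixels apart on a 1 x 5 grid:
a = [[1, 0, 0, 0, 0]]
b = [[0, 0, 0, 0, 1]]
steps = dilation_steps_until_touch(a, b)
```

With, say, 8 nm pixels, each dilation step corresponds to 8 nm of growth, so the step count translates directly into a spacing measurement.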
[0065] GPUs and TPUs utilize highly parallel architectures. While a CPU is excellent at handling one set of very complex instructions, a GPU or TPU is very good at handling many sets of very simple instructions, such as those related to neural network processing. Pixel-based methods such as neural networks therefore advantageously use the high degree of parallelism present in GPU and TPU devices to perform their processing rapidly, and are used in some embodiments to accelerate curvilinear design rule checking operations.
[0067] After 605, the process 600 forks into two sub-processes. The first sub-process includes operations 620 and 625, which generate known inputs X for neural network training at 630. The second sub-process includes operations 610, 615, 622, and 627 for generating several known outputs Y, each associated with a known input X. Specifically, at 610, the process 600 performs a DRC check operation on the generated design. This DRC check operation in some embodiments uses a known DRC technique, such as traditional geometric means, equation-based means, circle tracing, or any other means.
[0068] The process 600 then identifies (at 615) output polygons produced by DRC checks. Both the original design and the DRC polygons are rasterized (at 620 and 622, respectively) to images. The process 600 then groups (at 625 and 627, respectively) the rasterized image of the design and the DRC polygons into tiles, which correspond to smaller portions of the overall IC design. Splitting the IC design into smaller pieces is advantageous as these smaller designs are more suitable for processing (at 630) by the neural network. Some embodiments perform the process 600 as many times as needed for as many IC designs as needed in order to sufficiently train the neural network. In some embodiments, the neural network is trained using the information from just one design, while in other embodiments, the neural network is trained by using information from multiple designs. After 630, the process 600 ends.
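The tiling step described above (splitting a rasterized design into smaller pieces for the network) can be sketched as follows. The function name, the row-major ordering, and the assumption that the image dimensions divide evenly by the tile size are all illustrative choices, not details from the disclosure:

```python
def split_into_tiles(img, tile):
    """Split a 2D raster (a list of equal-length rows) into
    non-overlapping tile x tile sub-images, in row-major order.
    The image size is assumed to be an exact multiple of the
    tile size."""
    h, w = len(img), len(img[0])
    tiles = []
    for r0 in range(0, h, tile):
        for c0 in range(0, w, tile):
            tiles.append([row[c0:c0 + tile] for row in img[r0:r0 + tile]])
    return tiles

# A 16x16 raster with distinct pixel values, split into four 8x8 tiles:
raster = [[r * 16 + c for c in range(16)] for r in range(16)]
tiles = split_into_tiles(raster, 8)
```

The design raster and the DRC-marker raster would be tiled identically so that each input tile X stays aligned with its output tile Y.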
[0069] The collected tiles in some embodiments are stored on a disk as individual image files, or in a database, or any other appropriate form for neural network training. When the design contains multiple design layers, each layer is rasterized individually in some embodiments. The resulting single-layer raster images in some embodiments are stored separately, or combined into multiple-channel raster images and essentially stored together in other embodiments. As shown in
[0070] To train the neural network, some embodiments feed each known input (a rasterized input pattern from the X data) through the neural network to produce a predicted output Y′, and then compare this predicted output Y′ to the known output Y (e.g., DRC polygon) of the input to compute a set of one or more error values (e.g., compute a difference value based on the difference between the known output and the predicted output). The error values for a group of known inputs/outputs are then used to compute a loss function (such as a cross-entropy loss function described below), which is then back propagated through the neural network to train the configurable parameters (e.g., the weight values) of the neural network. Once trained by processing a large number of known inputs/outputs, the trained neural network can then be used (as described above by reference to
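The per-pixel error and loss computation described above can be illustrated with a toy mean-squared-error calculation. The 2x2 tiles and their values are invented for illustration; the networks discussed later operate on 256x256 tiles, and some embodiments use a cross-entropy loss instead:

```python
def mse_loss(y_pred, y_true):
    """Mean-squared error over a batch of rasterized tiles: the
    average of (Y' - Y)^2 taken over every pixel of every tile."""
    total, count = 0.0, 0
    for tile_pred, tile_true in zip(y_pred, y_true):
        for row_pred, row_true in zip(tile_pred, tile_true):
            for p, t in zip(row_pred, row_true):
                total += (p - t) ** 2
                count += 1
    return total / count

y_true = [[[0.0, 1.0], [1.0, 0.0]]]   # one 2x2 ground-truth tile Y
y_pred = [[[0.1, 0.9], [0.8, 0.0]]]   # the network's prediction Y'
loss = mse_loss(y_pred, y_true)
```

A training framework would then back-propagate the gradient of this loss to adjust the network's weights.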
[0071] In some embodiments, single layer design data ‘X’ are produced from randomly generated Manhattan and/or diagonal shapes of various dimensions and at various locations.
[0072] In some embodiments, curvilinear data are generated from rectilinear/Manhattan and/or diagonally generated data by applying different transformations. Manufacturing process simulation software in some embodiments is used to achieve the transformation, where for example, the input data to the simulators represent a set of Manhattan, rectilinear, and/or diagonal shapes which are to be manufactured using a semiconductor manufacturing process, and the output shapes produced by the software are the corresponding shapes that are expected to be manufactured, given the limitations of the manufacturing process. In other embodiments, a (different) appropriately trained neural network is used to determine the transformation to curvilinear shapes. For example, when the curvilinear shapes represent the outputs of a semiconductor manufacturing process, the trained neural network disclosed in U.S. Pat. Application No. 16/949,270, now published as U.S. Pat. Publication 2022/0128899 is used in some embodiments to determine the curvilinear shapes.
[0073] For multiple-layer DRC rules, multiple-layer design data ‘X’ in some embodiments are also produced from randomly generated Manhattan and/or diagonal shapes of various dimensions and at various locations.
[0074] One application in semiconductor manufacturing is the manufacturing of metal shapes that need to fully enclose a via cut layer when transitioning a conductor from one metal layer to another.
[0075] It is common in rectilinear semiconductor designs for via cut shapes to be a square 102 as shown in
[0077] While designed rectilinear vias in semiconductor devices will tend to be square or rectangular in shape, some embodiments are not limited to these shapes only. Instead, some embodiments generate multiple layer data with a variety of shapes to expose the neural network to a variety of such shapes during training, in order to allow the trained network to generalize better, and to allow it to be used in other problem domains in which more complex multiple-layer curvilinear shapes are encountered.
[0079] Labeled data ‘Y’ corresponding to DRC violation markers in some embodiments are produced from the inputs ‘X’ by way of a DRC checking step. Any DRC mechanism such as traditional geometry-based DRC checking, equation-based checking, or the circle-tracing methods discussed previously may be used.
[0082] As noted previously, ‘false positive’ DRC markers in some embodiments are inadvertently created when performing DRC checks upon certain designs, particularly those with curvilinear content. This is largely due to the ‘snapping’ of geometric coordinates to a grid system, common in state-of-the-art geometry editing tools such as a circuit design layout editor.
[0083] In some embodiments, DRC markers (which survive the filtering step above) are created with at least a minimum size to facilitate their rasterization and learning during neural network training. In other embodiments, DRC markers are intentionally oversized to achieve the same goal. For example, the DRC marker polygons are oversized by one pixel dimension value on each edge, where the pixel dimension corresponds to the pixel dimension used when subsequently rasterizing the images. A pixel size of 8 nm in some embodiments is used during rasterization, hence the oversizing amount is 8 nm for each edge of the DRC marker polygons. Other oversize amounts are used without departing from the spirit of some embodiments of the invention. One reason for oversizing the DRC markers is to ensure that they are still clearly present after rasterization, i.e., clearly visible in the rasterized images. For example, in some embodiments, DRC marker polygons that are sub-pixel in dimension (e.g., a small 5x6 nm DRC marker) are not particularly visible in grey-scaled rasterized images if larger pixel sizes (such as 8x8 nm) are used in the rasterization process. The DRC markers so-produced in this process are referenced as ‘ground truth’ in this document.
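For an axis-aligned marker, the per-edge oversizing described above reduces to a simple coordinate adjustment. The rectangle representation and coordinates below are hypothetical; curvilinear markers would require a proper polygon-offsetting operation instead:

```python
def oversize_marker(rect, amount):
    """Grow an axis-aligned DRC marker (x0, y0, x1, y1), in design
    units, outward by `amount` on every edge so that sub-pixel
    markers remain visible after rasterization."""
    x0, y0, x1, y1 = rect
    return (x0 - amount, y0 - amount, x1 + amount, y1 + amount)

# A 5 x 6 nm marker oversized by one 8 nm pixel on each edge
# becomes 21 x 22 nm, comfortably larger than one pixel:
marker = oversize_marker((100, 100, 105, 106), 8)
```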
[0089] This architecture modifies the U-Net architecture (used for biomedical image segmentation) in several ways. First, the input images are 256x256 in the height and width dimensions, unlike the original's 572x572 inputs. Likewise, the output image dimensions are 256x256, rather than the original's 388x388. This is due to the use of padded convolutional operations, as opposed to the original's un-padded operations. Furthermore, the network comprises only three down-sampling steps, compared with the original's four. Another change is that the initial set of convolution operations uses a filter depth of 32, unlike the original's 64. These changes allow the network to be much smaller in terms of its number of trainable parameters, while still producing outputs (DRC markers) that are sufficiently accurate. As a result, the network is also faster to train and faster to evaluate.
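The shapes implied by these modifications can be tabulated with a small helper. This is a sketch under stated assumptions: padded ("same") convolutions leave the spatial size unchanged, each down-sampling step halves the spatial size, and the filter depth doubles at each level per the usual U-Net convention (the document states the initial depth of 32 but not the per-level doubling):

```python
def unet_feature_sizes(input_size, down_steps, base_depth):
    """Return (spatial_size, filter_depth) for each encoder level of
    a U-Net-style network with padded convolutions, where every
    down-sampling step halves the spatial size and doubles the
    filter depth."""
    sizes = [(input_size, base_depth)]
    for _ in range(down_steps):
        size, depth = sizes[-1]
        sizes.append((size // 2, depth * 2))
    return sizes

# 256x256 input, three down-sampling steps, initial depth 32:
levels = unet_feature_sizes(256, down_steps=3, base_depth=32)
```

Because the convolutions are padded, the decoder path mirrors these sizes back up to a 256x256 output, matching the input resolution.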
[0090] Finally, the output layer is very different. Rather than using a softmax activation function output in combination with a cross-entropy-based loss function, some embodiments use a linear activation function output in combination with a mean-squared error loss function. The output produced by the original U-Net is essentially a Boolean output per pixel (each pixel is either fully part of a segmentation class or it is not), whereas the network in some embodiments of the present invention acts as a regression application, predicting pixel values that lie anywhere between 0.0 and 1.0 per pixel. The regression approach allows for more fine-grained accuracy in computing the contours later (the contours are not snapped to pixel edges), and also tends to suffer less from issues with learning/predicting DRC markers that are as small as 1 pixel (8 nm) per side.
[0091] For multiple-layer DRC rules, the number of channels is expanded in the input image. For a minimum-enclosure rule, which involves two layers, the input tiles are 256x256x2 (using a channels-last representation), which has two channels (for example, one channel for the inner layer, and one for the outer layer).
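The channels-last stacking of per-layer rasters can be sketched as follows. The 2x2 tiles are illustrative stand-ins for the document's 256x256 tiles, and the function name is an assumption:

```python
def stack_channels_last(*layers):
    """Combine single-layer rasters of identical size into one
    channels-last tile: result[r][c] is the list of per-layer
    values at pixel (r, c)."""
    h, w = len(layers[0]), len(layers[0][0])
    return [[[layer[r][c] for layer in layers] for c in range(w)]
            for r in range(h)]

inner = [[1, 0], [0, 0]]   # e.g., a via (inner) layer tile
outer = [[1, 1], [1, 0]]   # e.g., a metal (outer) layer tile
tile = stack_channels_last(inner, outer)   # shape 2 x 2 x 2
```

In an array framework, the same operation is a stack along the trailing axis, yielding a HxWx2 input for a two-layer minimum-enclosure rule.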
[0092] In some embodiments, a dedicated neural network is assigned to each type of DRC rule. If there are N DRC rules, then there are N dedicated neural networks, each with its own individual set of weights learned during training. In other embodiments, a single neural network is used for processing multiple DRC rules at once, by adding additional output channels.
[0093] In some embodiments, the output(s) produced by the neural network are considered as surfaces (like mountain ranges), with peaks (mountain tops) corresponding to DRC violation marker locations. This is achieved by using a linear output activation function, as opposed to the sigmoid activation function used by the original biomedical U-Net application. Contour operations in some embodiments are used to convert the surface peak images produced by the trained neural network into DRC marker polygons in geometric form, which are then readily viewed in geometry-based design editing tools such as integrated circuit layout editors, etc.
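A minimal sketch of reducing the surface-like network output to marker pixels, prior to the contouring step, is shown below. The 0.5 threshold and the pixel values are illustrative assumptions; the document does not specify a threshold:

```python
def marker_pixels(surface, threshold=0.5):
    """Treat the network output as a surface and keep the pixels
    whose height exceeds a threshold; these pixels form the DRC
    marker regions that contouring then converts to polygons."""
    return [[1 if v > threshold else 0 for v in row] for row in surface]

# A surface with one "peak" (potential DRC violation) spanning
# two pixels:
surface = [[0.05, 0.10, 0.02],
           [0.08, 0.93, 0.88],
           [0.01, 0.07, 0.04]]
mask = marker_pixels(surface)
```

Contouring the continuous surface directly (rather than this binarized mask) is what lets the marker outlines land between pixel edges, as noted above.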
[0094] Many thousands of data sample (X, Y) tile pairs are generated using the system discussed previously in order to train the neural network. These tiles in some embodiments are split into multiple databases, with a large portion (e.g., 80%) of the tiles being saved to a ‘training’ database and a smaller portion (e.g., 15%) stored to a ‘validation’ database. The remaining portion (e.g., 5%) in some embodiments is stored in a test database. In some embodiments, an HDF5 file format is used to store this database, though other file/database formats could be used without departing from the spirit of the invention. The training database examples are used to teach the network about the relationship between X (design data layers, rasterized) and Y (DRC violation markers, rasterized), using standard techniques familiar to those skilled in the art of deep learning. The examples from the validation database in some embodiments are used to evaluate the progress of the training. The “training” data set is the general term for the samples used to create and tune the model, while the “validation” data set is used to qualify performance.
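The 80/15/5 split described above can be sketched as follows. The shuffle, the fixed seed, and the rounding behavior are illustrative choices, not details from the disclosure:

```python
import random

def split_tiles(tiles, fractions=(0.80, 0.15, 0.05), seed=0):
    """Shuffle (X, Y) tile pairs and split them into training,
    validation, and test subsets by the given fractions."""
    tiles = list(tiles)
    random.Random(seed).shuffle(tiles)  # deterministic shuffle
    n = len(tiles)
    n_train = round(fractions[0] * n)
    n_val = round(fractions[1] * n)
    return (tiles[:n_train],
            tiles[n_train:n_train + n_val],
            tiles[n_train + n_val:])

pairs = [(i, i) for i in range(1000)]   # stand-ins for (X, Y) tiles
train, val, test = split_tiles(pairs)
```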
[0098] In both images, the lighter shade shapes 2612 (e.g., displayed as orange on a display screen in some embodiments) represent the CAD data that is the same in both images. The left image 2602 contains ground truth DRC violation markers 2614, which appear as darker shade shapes (e.g., displayed as blue on a display screen in some embodiments). These markers 2614 are obtained using a geometry-based DRC engine. The right image 2604 is reconstructed from the trained neural network output. This image 2604 contains predicted DRC violation markers 2622, which appear as darker shade shapes (e.g., displayed as red on a display screen in some embodiments). At the high-altitude zoom level shown in the figure, both images 2602 and 2604 appear essentially identical with the DRC markers 2614 and 2622 appearing at the same locations in both images.
[0101] In both images, the lighter colored shapes 2812 (e.g., lighter grey shapes in the figure that are displayed as orange shapes on the display screen in some embodiments) represent the design data for the outer layer, which is the same in both left and right images 2802 and 2804. Also, in both images, the darker-colored shapes 2814 (some shown with left-to-right cross hatching) represent the design data for the inner layer. The design rule checks that the outer layer overlaps the inner layer with a minimum enclosure of 20 nm. Though hard to see at this high-altitude zoom level, the design data in both images is curvilinear, which will be appreciated in the zoomed-in (low-altitude zoom) images shown later. The left image 2802 contains ground truth DRC violation markers 2816 (e.g., darkest shade of grey shapes that are displayed as blue markers on a display screen in some embodiments). These markers 2816 are obtained using a geometry-based DRC engine. The right image 2804 is reconstructed from the trained neural network output. This image 2804 contains predicted DRC violation markers 2818 (some shown with right-to-left cross hatching), which in some embodiments are displayed as red markers on the display screen. At the high-altitude zoom level shown in the figure, both images 2802 and 2804 again appear essentially identical. DRC markers appear at the same locations in both images.
[0105] Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
[0106] In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
[0108] The bus 3205 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 3200. For instance, the bus 3205 communicatively connects the processing unit(s) 3210 with the read-only memory (ROM) 3230, the system memory 3225, and the permanent storage device 3235. From these various memory units, the processing unit(s) 3210 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.
[0109] The ROM 3230 stores static data and instructions that are needed by the processing unit(s) 3210 and other modules of the electronic system. The permanent storage device 3235, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 3200 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 3235.
[0110] Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 3235, the system memory 3225 is a read-and-write memory device. However, unlike storage device 3235, the system memory is a volatile read-and-write memory, such as a random-access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 3225, the permanent storage device 3235, and/or the read-only memory 3230. From these various memory units, the processing unit(s) 3210 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
[0111] The bus 3205 also connects to the input and output devices 3240 and 3245. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 3240 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 3245 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
[0112] Finally, as shown in
[0113] Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
[0114] While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
[0115] As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
[0116] While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, a number of the figures conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Therefore, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.