Cloud-Based Multi-Camera Quality Assurance Lifecycle Architecture
20230143402 · 2023-05-11
Inventors
- Kyle Bebak (Mexico City, MX)
- Eduardo Mancera (Mexico City, MX)
- Milind Karnik (Mission Viejo, CA, US)
- Arye Barnehama (Pasadena, CA, US)
- Daniel Pipe-Mazo (Redondo Beach, CA, US)
CPC classification
- G05B23/0245 (PHYSICS)
- G05B19/41815 (PHYSICS)
- G06V20/95 (PHYSICS)
- G06V20/52 (PHYSICS)
- G05B2219/32197 (PHYSICS)
- G05B2219/32179 (PHYSICS)
- G05B2219/31447 (PHYSICS)
- G06V10/25 (PHYSICS)
International classification
- G05B19/418 (PHYSICS)
Abstract
Data is received that includes a feed of images of a plurality of objects passing in front of each of a plurality of inspection camera modules forming part of each of a plurality of stations. The stations can together form part of a quality assurance inspection system. The objects, when combined or assembled, can form a product. The received data derived from each inspection camera module can be separately analyzed using at least one image analysis inspection tool. The analyzing can include visually detecting a unique identifier for each object. The images are transmitted with results from the inspection camera modules along with the unique identifiers to a cloud-based server to correlate results from the analyzing for each inspection camera module on a product-by-product basis. Access to the correlated results can be provided to a consuming application or process via the cloud-based server.
Claims
1. A computer-implemented method for providing quality assurance comprising: receiving, for each of a plurality of stations, data comprising a feed of images of a plurality of objects passing in front of one or more inspection camera modules within the station, each image having a corresponding timestamp or identifier, the objects comprising subassemblies, subcomponents, or intermediate versions forming at least a portion of a product; separately analyzing the received data from each inspection camera module using at least one image analysis inspection tool comprising an ensemble of different machine learning-based image analysis inspection tools; correlating results from the analyzing, on a product-by-product basis, for each inspection camera module from the plurality of stations such that results across multiple stations can be viewed and processed in aggregate for each of the objects forming part of at least the portion of the product; and providing access to the correlated results to a consuming application or process.
2. The method of claim 1, wherein the stations belong to a single manufacturing line within a single manufacturing facility.
3. The method of claim 1, wherein the stations belong to multiple manufacturing lines within a single manufacturing facility.
4. The method of claim 1, wherein the stations belong to multiple manufacturing lines across multiple manufacturing facilities.
5. The method of claim 1, wherein all of the objects forming the product have a single unique identifier which is used to correlate the results.
6. The method of claim 1, wherein the objects forming the product have varying identifiers, and wherein the correlation of results utilizes a set of user-provided rules to group the identifiers received to the product.
7. The method of claim 6, wherein a first station of the plurality of stations detects a first identifier and a second station of the plurality of stations detects a second identifier different from the first identifier.
8. The method of claim 5, wherein the correlation of results further utilizes a timestamp associated with each image that is particular to the station capturing such image.
9. The method of claim 7, wherein the correlation of results further utilizes a timestamp associated with each image that is particular to the station capturing such image.
10. The method of claim 1, wherein the objects comprise (i) a final assembly or packaged version of the product, (ii) a partial assembled or packaged version of the product or a portion of the product, or (iii) subassemblies to combine to form the product.
11. The method of claim 1 further comprising: generating, based on the correlated results, an inspection result for each object characterizing whether such objects are defective or aberrant.
12. The method of claim 11, wherein the generating uses a set of rules to determine that the object is defective or aberrant based on inspections of varying areas of interest (AOI) in the images, wherein if one AOI is deemed to be defective or aberrant, the object is characterized as being defective or aberrant.
13. The method of claim 1 further comprising: causing the correlated results to be stored in a remote cloud-based database.
14. The method of claim 1 further comprising: causing the correlated results to be stored in a local database.
15. The method of claim 1, wherein each image analysis inspection tool comprises a machine learning model trained for a particular one of the two or more inspection camera modules.
16. The method of claim 1, wherein the objects are moved in front of the inspection camera modules via one or more conveyance mechanisms.
17. The method of claim 1, wherein the inspection camera modules utilize a same type of trigger to capture the respective feed of images.
18. The method of claim 1, wherein at least two of the inspection camera modules utilize a different type of trigger to capture the respective feed of images.
19. The method of claim 18, wherein the type of triggers comprise: hardware triggers and software triggers.
20. The method of claim 19, wherein the software triggers utilize machine learning to determine when to capture an image for the feed of images.
21. The method of claim 1, wherein each of the inspection camera modules are connected to a single computing device having a clock, and wherein the method further comprises: assigning a timestamp to each image using the clock; wherein the correlating uses the assigned timestamps to associate images for a particular object.
22. The method of claim 1, wherein two or more of the inspection camera modules are connected to different computing devices each initially having a respective, non-synchronized clock, and wherein the method further comprises: synchronizing the clocks of the different computing devices; and assigning a timestamp to each image using the corresponding clock for the computing device to which the respective inspection camera module is connected; wherein the correlating uses the timestamps to associate images for a particular object.
23. The method of claim 22, wherein the clocks are synchronized using a local timeserver.
24. The method of claim 22, wherein the clocks are synchronized using a remote, Internet-based timeserver.
25. The method of claim 1 further comprising: assigning a counter value to each image; wherein the correlating uses the assigned counter values to associate images for a particular object.
26. The method of claim 1 further comprising: applying a timing offset to timestamps for images generated by one of the inspection camera modules based on a distance of such inspection camera modules relative to the other inspection camera modules; wherein the correlating uses the timestamps after application of the timing offset to associate images for a particular object.
27. The method of claim 1 further comprising: detecting, by one or more image analysis inspection tools, a unique identifier for each object; wherein the correlating uses the detected unique identifiers to associate images for a particular object.
28. The method of claim 27, wherein the unique identifier comprises one or more of a barcode, an alphanumeric string, or an identifier generated by a production line controller (PLC).
29. A computer-implemented method for providing quality assurance comprising: receiving data comprising a feed of images of a plurality of objects passing in front of each of a plurality of inspection camera modules forming part of each of a plurality of stations, the stations together forming part of a quality assurance inspection system, the objects, when combined or assembled, forming a product; separately analyzing the received data from each inspection camera module using an ensemble of different machine learning-based image analysis inspection tools, the analyzing comprising visually detecting a unique identifier for each object; transmitting the images with results from the inspection camera modules and the unique identifiers to a cloud-based server to correlate results from the analyzing for each inspection camera module on a product-by-product basis based on the identifiers of the objects; and providing access to the correlated results to a consuming application or process via the cloud-based server.
30. A computer-implemented method for providing quality assurance comprising: receiving data comprising a feed of images of a plurality of objects passing in front of each of a plurality of inspection camera modules forming part of each of a plurality of stations, the stations together forming part of a quality assurance inspection system, the objects, when combined or assembled, forming a product, each of the images having a corresponding timestamp; separately analyzing the received data from each inspection camera module using at least one image analysis inspection tool comprising an ensemble of different machine learning models; transmitting the images along with results from the inspection camera modules and the timestamps to a cloud-based server to correlate results from the analyzing for each inspection camera module on a product-by-product basis for all of the objects which, when combined or assembled, form the corresponding product; and providing access to the correlated results to a consuming application or process.
Description
DESCRIPTION OF DRAWINGS
[0059] Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION
[0060] The current subject matter is directed to a multi-camera architecture for identifying anomalies or other aberrations on objects within images, with particular application to quality assurance applications such as production lines, inventorying, and other supply chain activities in which product/object inspection is desirable. The techniques herein leverage computer vision, machine learning, and other advanced technologies. The techniques encompass both hardware and software methodologies with a shared primary goal of making camera-based quality inspection systems easier to use. Ease of use can be achieved through methodologies including removing the need for commonly used hardware components; including multiple variants of hardware components and allowing the user to switch between them via a software interface; and visualizing the output and/or decisions of complex algorithmic processes, such as machine learning algorithms, in order to make the system interface more interpretable to an average user. Further, the generated data can be stored locally, stored remotely (e.g., in a cloud computing system, remote database, etc.), and/or stored on a combination of same.
[0062] The objects 120 can be partially completed versions of a “final object” being produced, subassemblies to be used in production of a “final object”, or the “final object” itself. Stated differently, the current subject matter is applicable to finished products as well as to the various components making up those products throughout their respective manufacturing line processes. As the objects 120 pass through the inspection line process they may be modified, added to, and/or combined, and it is valuable to the end user to be able to correlate images of these various objects throughout the inspection line process. The object and/or its subassemblies may be processed at multiple different locations at various points in time, and the system described herein provides a technique to correlate all images of the final object across these points in space and time.
[0063] While the example of
[0066] Advances in manufacturing allow for manufacturing processes to handle objects ranging from raw materials to complex electrical assemblies and the like. For example, a manufacturing process can include inputs such as components, raw materials, etc. being input to a single manufacturing line 170 and being output as a final product. These inputs can also include a partially manufactured object or objects. The inputs to a manufacturing line 170 are sometimes referred to herein as “manufacturing inputs.”
[0067] As noted above, a manufacturing process can include multiple manufacturing lines 170 which, in turn, can be in adjacent or non-adjacent physical locations. The non-adjacent physical locations can be within the same manufacturing facility or within multiple manufacturing facilities. The output of an initial manufacturing line 170 can be processed immediately or soon thereafter through one or more subsequent manufacturing lines 170, or the output can be processed through one manufacturing line 170 and stored so that it can be subsequently be processed in the subsequent manufacturing lines. The subsequent manufacturing line 170 can perform further modifications or improvements on the output from the first manufacturing line 170. Other variations are possible in which differing manufacturing lines 170 generate different objects 120 (e.g., different components, sub-assemblies, etc.) at different insertion points into an overall manufacturing process. Further, one or more of the manufacturing lines 170 can have a corresponding station 180.
[0072] Within a station 180, not all inspection camera modules 130 need to detect the identifier used for cross-station correlation. Correlation between the inspection camera modules 130 in a station 180 can be done utilizing synchronized timestamps or other methods discussed below. As long as at least one inspection camera module 130 within the station receives the unique identifier, shared or otherwise, to be used for correlation, the final output correlation can be produced utilizing all results from all inspection cameras in all stations.
[0073] Historical data can be saved locally on the camera system 130 and/or stored in a cloud database. This data can be correlated such that the various views of the objects 120 can be easily obtained for subsequent analysis or other processing. With the variation in
[0074] One of the issues addressed with the current subject matter is the correlation of data obtained by multiple inspection camera modules 130.sub.1 . . . n so that relevant information about the objects 120 can be used by a consuming application or process such as historical review of manufacturing practices, etc. In the variation of diagram 1000 of
[0075] In the variation of diagram 1100 of
[0076] In the variation of diagram 1200 of
[0077] In the variation of diagram 1300 of
[0078] Results from the pipeline are now published to a “cloud application”, where the results contain all of the data that they did previously, but now have this additional synchronized timestamp and/or unique identifier (as described below).
[0080] The image sensor 1410 can assign a timestamp 1412 to each raw image 1415 which is based on a local clock and/or a local counter running at a certain frequency, etc. In some cases, this timestamp 1412 is not synchronized to any other systems. In such cases, the vision processing pipeline 1420 can perform operations so as to align the image timestamp 1412 to a synchronized clock 1414. These operations can include performing synchronization using various protocols including NTP, SNTP, PTP, and the like.
[0081] Aspects which define the boundaries of the AOIs (which can be static or dynamic based on the particular raw image 1415) can be specified within an inspection routine configuration 1435. An AOI as used herein can be specified as a region (x, y, width, height) within an image that should be further analyzed. In some cases, if there are multiple AOIs, one or more of such AOIs can overlap.
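As a non-authoritative sketch, an AOI of the (x, y, width, height) form described above, together with a cropping helper and an overlap test, might look like the following; the `AOI` class and `overlaps` function are illustrative assumptions, not part of the disclosed system:

```python
from dataclasses import dataclass

@dataclass
class AOI:
    """Area of interest: a rectangular region (x, y, width, height) within an image."""
    x: int
    y: int
    width: int
    height: int

    def crop(self, image):
        """Return the sub-image covered by this AOI (row-major nested lists)."""
        return [row[self.x:self.x + self.width]
                for row in image[self.y:self.y + self.height]]

def overlaps(a: AOI, b: AOI) -> bool:
    """True if two AOIs share at least one pixel (axis-aligned rectangle test)."""
    return (a.x < b.x + b.width and b.x < a.x + a.width and
            a.y < b.y + b.height and b.y < a.y + a.height)
```

Multiple AOIs from the inspection routine configuration could then each be cropped out of the same raw image and dispatched to different tools, overlapping or not.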
[0082] The inspection routine configuration 1435 can also specify which of the image analysis inspection tools 1440.sub.1, 1440.sub.2 is to analyze the corresponding AOI of the raw image 1415. The vision processing pipeline 1420 can cause the AOIs 1430.sub.1, 1430.sub.2 to be respectively passed or otherwise transmitted to or consumed by the different image analysis inspection tools 1440.sub.1, 1440.sub.2. Each of the image analysis inspection tools 1440.sub.1, 1440.sub.2 can generate information complementary to the object within the raw image 1415, which can take the form of a respective overlay 1445.sub.1, 1445.sub.2. Such complementary information can take various forms including, for example, various quality assurance metrics such as dimensions, color, and the like, as well as information as to the explainability of the decisions by the image analysis inspection tools 1440.sub.1, 1440.sub.2 (e.g., why a machine learning model believes an item to be defective and/or the extent of the defective region found on the product, etc.).
[0083] The vision processing pipeline 1420 can generate a composite overlay 1450 based on the respective overlays 1445.sub.1, 1445.sub.2. The weighting and/or transparency in which the overlays 1445.sub.1, 1445.sub.2 can be combined can be pre-specified in some cases. The vision processing pipeline 1420 can then combine the composite overlay 1450 with the raw image 1415 to result in a composite object image 1455. That composite object image 1455 can then be compressed or otherwise encoded 1460 and then published 1465 for access by a cloud application 1470. In some cases, the cloud application 1470 can correspond to a product line visualization system.
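The weighted overlay combination described above can be sketched as follows, assuming for simplicity that each overlay is a grayscale heat map with values in [0, 1] and that the blended heat value acts as the overlay's opacity over the raw pixel; the function name and blending convention are illustrative assumptions:

```python
def composite_overlay(raw, overlays, weights):
    """Blend per-tool overlays (grayscale heat maps in [0, 1]) using
    pre-specified weights, then superimpose the result onto the raw
    grayscale image, treating the blended heat as the overlay's alpha."""
    total = sum(weights)
    height, width = len(raw), len(raw[0])
    out = []
    for i in range(height):
        row = []
        for j in range(width):
            # weighted average of all tool overlays at this pixel
            heat = sum(w * ov[i][j] for w, ov in zip(weights, overlays)) / total
            # heat acts as alpha: full heat paints the pixel white (1.0)
            row.append((1 - heat) * raw[i][j] + heat * 1.0)
        out.append(row)
    return out
```

A production pipeline would typically operate on RGBA buffers rather than grayscale lists, but the weighting/transparency logic is the same.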
[0084] The published information sent to the cloud application 1470 can include one or more of: explainability/complementary information, visual overlays as well as information from the image analysis inspection tools 1440.sub.1 . . . n. The image analysis inspection tools 1440.sub.1 . . . n can specify one or more of: results (e.g., pass/fail, etc.) for each AOI, tool metadata (e.g., detailed information about the result of the tool including explainability information), read bar codes, read text (via OCR), the confidence of utilized machine learning/computer vision models, and the synchronized timestamp for each picture.
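One possible shape for a single published-result message covering the items listed above is sketched below; every field name is an illustrative assumption rather than the patent's actual schema:

```python
# Hypothetical published-result payload for one image; the field names
# are illustrative assumptions, not the disclosed wire format.
published_result = {
    "camera_module": "130_1",
    "synchronized_timestamp": 1683800000.123,
    "unique_identifier": "SN-000123",            # e.g. a scanned barcode
    "aoi_results": [
        {
            "aoi": {"x": 40, "y": 10, "width": 200, "height": 120},
            "tool": "barcode_reader",
            "result": "pass",                    # pass/fail per AOI
            "confidence": 0.98,                  # model confidence
            "metadata": {"decoded_text": "SN-000123"},
        },
    ],
    "overlays": ["overlay_1445_1.png"],          # visual overlay references
}
```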
[0085] The image analysis inspection tools 1440 can take various forms including, for example, computer vision or machine learning algorithms whose function is either to modify the raw image for the purpose of allowing other tools to inspect it, or which consume an AOI and provide quality inspection analysis and complementary information back to the vision processing pipeline (such as tools 1440.sub.1 and 1440.sub.2) in
[0086] Image analysis inspection tools can be configured by the user. A part of the configuration may be an image or set of images, referred to herein as reference image or images, which the user believes are standard, typical, or otherwise exemplary images of the product with respect to the total corpus of images which may be obtained of the product during the quality assurance inspection application. Further, a part of the configuration may be an image or set of images, referred herein to as the training image or images, which the user labels or otherwise marks, which are to be used in conjunction with an image analysis inspection tool which, as part of its configuration, requires the training of a computer vision or machine learning model. A user label or mark on the training images may be “pass” or “fail” to indicate if the image is that of a product which should be considered to be passing or failing by the image analysis inspection tool. The label or mark may also be that of a particular class, where a class may be a single descriptor that is a member of a set of descriptors which can be used to describe an image of the product being inspected. An example of a class may be “A”, where the set of classes may be [“A”, “B”, “C”], if the image analysis inspection tool is being configured to determine if product variant “A”, “B”, or “C” is present in the image.
[0087] When an image analysis inspection tool 1440, which has been configured with a reference image or images, a training image or images, or all of the preceding, is producing quality assurance metrics on an image or feed of images 1415, it is optimal for the image or feed of images 1415 to be visually similar to the reference image or images and/or the training image or images. The closer the visual similarity between the image 1415 and the reference and/or training images, the more likely the image analysis inspection tool will perform its function properly. Machine learning models, in particular, can often perform poorly on “out of sample” images, where “out of sample” images are images on which the model has not been configured or trained. It can be useful to come up with a score, hereafter referred to as the “visual similarity score”, which can be a floating-point or integer number which represents how similar an image 1415 is to the set of reference and/or training image or images on which the image analysis inspection tool was configured. The visual similarity score may be measured through a variety of methods. One basic method may be a mathematical algorithm which analyzes the average color value of the pixels of the image 1415 and compares this to the average pixel value of the training and/or reference image or images to determine the score. Another more advanced method may utilize a statistical model to generate a probability that the image 1415 is a member of the distribution of reference and/or training images on which the image analysis inspection tool has been configured, where this probability or a linearly scaled representation of the probability, may then be used as the visual similarity score. The visual similarity score may be used as an input to the inspection tool 1440, but it may also be used in other areas within the vision processing pipeline, such as a software-based trigger module as described below.
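The basic average-pixel-value method for the visual similarity score can be sketched as follows; the function names and the assumption of grayscale pixels in [0, 1] are illustrative, and the statistical-model variant described above would replace the linear fall-off with a probability estimate:

```python
def mean_pixel(image):
    """Average pixel value over a 2-D grayscale image with values in [0, 1]."""
    return sum(sum(row) for row in image) / (len(image) * len(image[0]))

def visual_similarity_score(image, reference_images):
    """Score in [0, 1]: 1.0 when the image's mean pixel value matches the
    mean over the reference/training set, falling off linearly with the gap."""
    ref_mean = sum(mean_pixel(r) for r in reference_images) / len(reference_images)
    return max(0.0, 1.0 - abs(mean_pixel(image) - ref_mean))
```

A low score would suggest the image is "out of sample" relative to what the tool was configured on, which a software trigger or the tool itself could use to discount the result.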
[0088] The image analysis inspection tools 1440 implement a standardized application programming interface (API) for receiving commands and input data, such as AOIs 1430, from the vision processing pipeline 1420, and for returning quality assurance metrics and results including overlays 1445. The image analysis inspection tools 1440 can each run in their own host process or thread on the camera system edge computer, and the API utilizes inter-process communication methods to transfer the commands and data between the vision processing pipeline 1420 and the image analysis inspection tools 1440. Inter-process communication methods include, but are not limited to, shared memory, pipes, sockets (TCP, UDP, or UNIX), kernel data structures such as message and event queues, and/or files. Any image analysis inspection tool 1440 which conforms to and implements the specified API which the vision processing pipeline 1420 expects, utilizing the specified inter-process communication mechanism, can be used to analyze the corresponding AOI of the raw image 1415 and return quality assurance metrics including overlays 1445. Further, the tools can be fully containerized, in which the tool implementation (referred to herein as the software code, runtime requirements and dependencies, and associated metadata for the image analysis inspection tools 1440) is developed and downloaded or otherwise loaded onto the camera system fully independently from the remainder of the vision processing pipeline 1420. Containerization of the tool implementation can utilize technologies such as Docker, LXC, or other Linux containers to package the software code and dependencies. The associated metadata portion of the tool implementation may include a single file or set of files, where the file may be any format but may specifically be a compressed or uncompressed archive format such as .zip, .tar, or .7z.
When the vision processing pipeline 1420 is commanded to begin inspecting raw images 1415, it first checks the inspection routine configuration 1435 to determine which tool implementations are required for the specified image analysis inspection tools 1440. If the tool implementations are present on the camera system, as determined by querying a local data store, then the vision processing pipeline begins a new process or thread for each image analysis inspection tool 1440, where the new process or thread runs the software code as defined in the tool implementation, utilizes the runtime requirements or dependencies, and may reference and utilize the associated metadata file or files. If the tool implementations are not present on the camera system, the vision processing pipeline 1420 can download them from a cloud server if possible; otherwise, the vision processing pipeline can return an error and indicate as such to the user. The user interface for the camera system additionally allows a user to download or otherwise load the tool implementation for a given tool which they have configured onto a camera system on which they would like to run the tool. Through this system, it is possible to allow developers (e.g., software engineers, end users, etc.) to create and distribute tools for use in the vision processing pipeline 1420 without those developers needing to also be developers of the vision processing pipeline 1420, employees of the company or team which develops the vision processing pipeline 1420, or otherwise associated at all with any entity which maintains, develops, or implements the vision processing pipeline 1420. As long as the image analysis inspection tools 1440 are containerized as specified and implement the expected API via the IPC mechanisms, they may be fully utilized in the vision processing pipeline 1420.
[0089] Additional examples of quality inspection tools 1440 can include: a machine learning model which uses convolutional neural network (CNN) techniques to provide anomaly detection analysis based on images which the user has labeled (referred to herein as Tool A); a machine learning model which uses CNN techniques to provide pass-fail analysis based on images which the user has labeled (referred to herein as Tool B); a machine learning model which uses CNN techniques to provide class presence/absence determinations based on images which a user has labeled and then compare the detected classes to those that the user expects as configured in the inspection routine configuration 1435 in order to create a pass/fail determination (referred to herein as Tool C); a machine-learning or computer-vision based optical character recognition (OCR) tool which is configured to detect text in an image and compare the scanned text to that which the user has specified in the inspection routine configuration 1435 to be expected (referred to herein as Tool D); a machine-learning or computer-vision based barcode detection algorithm which is configured to scan barcodes, QR codes, data matrices, or any form of 2-D code and compare the code scanned to that which a user has specified in the inspection routine configuration 1435 to be expected (referred to herein as Tool E); and a computer-vision based algorithm which has been configured to check for the presence or absence of pixels of a particular color and passes or fails depending on the expected volume as specified by the user in the inspection routine configuration 1435 (referred to herein as Tool F).
[0090] Tool A, in addition to being able to identify anomalies, can indicate the location of the anomalies in the raw image without being trained on pixel-level labels. Pixel-level labels are time consuming to produce, as a user must manually mark the pixels in which the defects occur for every image in the dataset. As opposed to most CNN-based approaches, which use an encoder architecture that transforms a 2D input image into a 1D embedding, a fully convolutional network can be utilized. A fully convolutional network (sometimes referred to as an FCN), as used herein, is a neural network composed primarily of convolutional layers and no linear layers. This fully convolutional network maintains the natural 2D structure of an image in the output embedding of the network, such that when distance comparisons between embeddings and a learned centroid embedding are calculated, the larger elements of the 2D distance array indicate the region of the defect in the raw image. In addition to this architecture, a contrastive loss function can be utilized that allows for training the network on only nominal data, while also leveraging anomalous data when it is available. The contrastive loss function trains the network in a manner where the network is encouraged to place nominal samples near the learned centroid embedding and anomalous samples far away. By using these approaches, an overlay image can be produced that indicates an anomaly score for each pixel in the raw image.
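The distance-map and contrastive-loss ideas above can be sketched as follows, assuming a precomputed H x W x D spatial embedding (in practice this would come from the trained fully convolutional network); the function names and the hinge-style loss form are illustrative assumptions:

```python
def anomaly_map(embedding, centroid):
    """Per-location anomaly scores: Euclidean distance between each spatial
    embedding vector in an H x W x D grid and the learned centroid (length D).
    Large entries localize the defect region in the raw image."""
    return [[sum((e - c) ** 2 for e, c in zip(cell, centroid)) ** 0.5
             for cell in row] for row in embedding]

def contrastive_loss(distance, is_anomalous, margin=1.0):
    """Hinge-style contrastive loss: pull nominal samples toward the
    centroid (penalize distance), push anomalous samples at least
    `margin` away (penalize only when closer than the margin)."""
    if is_anomalous:
        return max(0.0, margin - distance) ** 2
    return distance ** 2
```

Upsampling such a distance map back to the raw image resolution would yield the per-pixel anomaly-score overlay described above.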
[0091] Tools B and C can utilize transfer learning and self-supervised learning where a CNN model trained on a separate task is adapted to the task at hand. This allows one to use much less data than if the model has been trained from scratch. Given this pretrained model, earlier layers can be reused and additional linear layers that are designed for the new task can be appended. In order to produce overlay visualizations, the regions in the raw image that contributed most to the prediction of the model can be identified.
[0092] For Tools D and E, the overlay can indicate, using a bounding box, the region of the image in which the text or barcode was found.
[0093] Tool F can produce an overlay visualization based on the regions of the raw image that match the configured color range.
[0095] The user settings in the inspection routine configuration 1435 can specify which of the results (i.e., published images including complementary information) from which image camera modules 130.sub.1 . . . n are to be correlated. The inspection routine configuration 1435 can specify time-based offsets among or between image camera modules 130.sub.1 . . . n. These offsets can correspond to or otherwise take into account any expected differences in times among images generated by the image camera modules 130.sub.1 . . . n based on their physical positioning relative to each other (and the speed of the conveyance mechanism, etc.). If all of the image camera modules 130.sub.1 . . . n share a trigger, the offset value would be zero (or, alternatively, the user settings do not include any offsets). If it is known that a first image camera module 130.sub.1 is roughly 500 ms down the production line from a second image camera module 130.sub.2, then the offset would be set to 500 ms.
[0096] The correlation service 1480 can also specify a time window in which certain images (and their complementary information) are grouped together. This time window can be used to associate images (and their complementary information) which might not be precisely aligned given differences in ideal/predicted timestamps and the synchronization processes. The time window can be specified to be approximately 50% of the inter-item time on the line to allow for maximum synchronization error (clocks/timestamps not perfectly in sync) and minimum correlation error (as pictures can be grouped together incorrectly on items). The inter-item time can be calculated by taking the line rate, e.g. 200 items per minute, and dividing by 60 seconds per minute to get 3.33 items per second, and then inverting this number, i.e. 1/3.33, to get an inter-item time of 0.3 seconds. One default for the correlation window can be 150 ms for a line rate of 200 items per minute. A simple implementation of the correlation algorithm can then be defined as follows: image_i should be correlated with existing correlated images {image_1, . . . , image_N} if abs(timestamp_i − avg(adjusted_timestamps_ij)) ≤ window_i, where avg(adjusted_timestamps_ij) = Σ_{j=1}^{N} (timestamp_j + offset_ij) / N. This algorithm can be modified to ensure no more than 1 image from a given camera is correlated on the same item.
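A minimal sketch of the correlation rule above, including the one-image-per-camera modification and the 50%-of-inter-item-time default window, might look like the following; the function names and the `(camera_j, camera_i) -> offset` lookup convention are illustrative assumptions:

```python
def default_window(items_per_minute):
    """~50% of the inter-item time, e.g. 0.15 s at 200 items per minute."""
    return 0.5 * (60.0 / items_per_minute)

def adjusted_timestamps(group, offsets, camera_i):
    """Timestamps of already-correlated images, each shifted by the
    configured per-camera-pair offset into camera_i's frame of reference.
    group: list of (camera_name, timestamp); offsets: {(cam_j, cam_i): s}."""
    return [ts + offsets.get((cam, camera_i), 0.0) for cam, ts in group]

def should_correlate(camera_i, timestamp_i, group, offsets, window):
    """image_i joins an existing correlated group when its timestamp lies
    within `window` of the group's average offset-adjusted timestamp and
    the group does not already hold an image from the same camera."""
    if any(cam == camera_i for cam, _ in group):
        return False  # no more than one image per camera per item
    adjusted = adjusted_timestamps(group, offsets, camera_i)
    return abs(timestamp_i - sum(adjusted) / len(adjusted)) <= window
```

For example, with a 500 ms offset between two cameras, an image arriving 500 ms after its upstream counterpart lands exactly on the adjusted average and is grouped onto the same item.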
[0099] For example, a user rule can specify that if any AOI on an image inspection module is assigned a fail, then the object is deemed to have failed the inspection. With some rules, a failure of a specific type can cause the object to be deemed to have failed the inspection. With some rules, a certain number of AOIs need to have an associated failure for the object to be deemed to have failed the inspection. As another example, the rules can specify that if a certain one of the image analysis inspection tools fails (e.g., a barcode reader, etc.), then the object can be deemed to have failed the inspection. As another example, the rules can be specified such that if at least one of the image analysis inspection tools passes, then the object passes. This can be useful, for example, when looking at multiple camera angles for a single barcode where the placement is inconsistent overall but consistent in that the barcode will be detected in at least one of the camera angles.
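The pass/fail rules described above amount to simple predicates over per-AOI or per-tool results. A minimal sketch, with hypothetical function names and a plain string encoding of results (both assumptions, not from the source):

```python
def fail_if_any_aoi_fails(aoi_results):
    """Object fails if any AOI result is a fail."""
    return "fail" if any(r == "fail" for r in aoi_results) else "pass"

def fail_if_n_or_more(aoi_results, n):
    """Object fails only when at least n AOIs have a failure."""
    fails = sum(1 for r in aoi_results if r == "fail")
    return "fail" if fails >= n else "pass"

def pass_if_any_tool_passes(tool_results):
    """Object passes if at least one tool passes, e.g. one of
    several camera angles reading the same barcode succeeds."""
    return "pass" if any(r == "pass" for r in tool_results) else "fail"
```

The last rule captures the multi-angle barcode case: the object passes as long as at least one camera angle yields a successful read.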
[0110] In one example, a disk controller 2948 can interface one or more optional disk drives with the system bus 2904. These disk drives can be external or internal floppy disk drives such as 2960, external or internal CD-ROM, CD-R, CD-RW or DVD drives, or solid state drives such as 2952, or external or internal hard drives 2956. As indicated previously, these various disk drives 2952, 2956, 2960 and disk controllers are optional devices. The system bus 2904 can also include at least one communication port 2920 to allow for communication with external devices either physically connected to the computing system or available externally through a wired or wireless network. In some cases, the at least one communication port 2920 includes or otherwise comprises a network interface.
[0111] To provide for interaction with a user, the subject matter described herein can be implemented on a computing device having a display device 2940 (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information obtained from the bus 2904 via a display interface 2914 to the user and an input device 2932 such as a keyboard and/or a pointing device (e.g., a mouse or a trackball) and/or a touchscreen by which the user can provide input to the computer. Other kinds of input devices 2932 can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback by way of a microphone 2936, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input. The input device 2932 and the microphone 2936 can be coupled to and convey information via the bus 2904 by way of an input device interface 2928. Other computing devices, such as dedicated servers, can omit one or more of the display 2940 and display interface 2914, the input device 2932, the microphone 2936, and input device interface 2928.
[0112] One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
[0113] These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores.
[0114] In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” In addition, use of the term “based on,” above and in the claims is intended to mean, “based at least in part on,” such that an unrecited feature or element is also permissible.
[0115] The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.