Patent classifications
G06F11/3692
Third-party testing platform
Systems and methods for conducting a test on a third-party testing platform are provided. A networked system causes presentation of a setup user interface to a third-party user, where the setup user interface includes a field for indicating an attribute of a publication to be tested. The networked system receives, via the setup user interface, an indication of the attribute, a subject to be tested, and one or more test parameters. The networked system applies the indicated attribute change to a first version of the publication to generate a second version of the publication. The first version is presented to a first subset of potential users and the second version is presented to a second subset. Interactions with both versions are monitored and analyzed to determine results of the test, which are then presented to the third-party user.
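The split-and-compare flow above can be sketched in a few lines of Python. The hash-based bucketing and the conversion metric are illustrative assumptions, not the patent's actual mechanism:

```python
import hashlib

def assign_variant(user_id: str, split: float = 0.5) -> str:
    # Deterministic bucketing: hash the user id so the same user always
    # sees the same version of the publication.
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "A" if bucket < split * 100 else "B"

def conversion_rate(interactions):
    # interactions: (variant, converted) pairs monitored during the test.
    totals, hits = {}, {}
    for variant, converted in interactions:
        totals[variant] = totals.get(variant, 0) + 1
        hits[variant] = hits.get(variant, 0) + int(converted)
    return {v: hits[v] / totals[v] for v in totals}

results = conversion_rate([("A", True), ("A", False), ("B", True), ("B", True)])
```

In practice the analysis step would also include a significance test before reporting results to the third-party user.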
VALIDATION OF A MACHINE LEARNING MODEL
Systems, methods, and computer-readable media are disclosed for validating a machine learning model. In one aspect, a machine learning model validation system can receive a test machine learning model, analyze an output of the test machine learning model, determine a degree of similarity between the test machine learning model and one or more machine learning models stored in a database based on that output, and determine whether the test machine learning model complies with a set of validation rules based on the degree of similarity with respect to one or more thresholds.
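A minimal sketch of the similarity check, assuming cosine similarity over model outputs on a probe set and a single compliance threshold (both are assumptions; the abstract does not fix a metric):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def validate_model(test_outputs, reference_outputs, threshold=0.95):
    # The test model "complies" if its outputs are close enough to at least
    # one approved reference model stored in the database (hypothetical rule).
    sims = [cosine_similarity(test_outputs, ref) for ref in reference_outputs]
    best = max(sims)
    return best >= threshold, best

ok, score = validate_model([1.0, 0.0, 1.0], [[1.0, 0.1, 0.9], [0.0, 1.0, 0.0]])
```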
VIDEO GAME TESTING AND AUTOMATION FRAMEWORK
An automated video game testing framework and method includes communicatively coupling an application programming interface (API) to an agent in a video game, where the video game includes a plurality of in-game objects that are native to the video game. The agent is managed as an in-game object of the video game. A test script is executed to control the agent, via the API, to induce gameplay and interrogate a behavior of a test object. The test object is identified from the plurality of in-game objects based on a query that specifies an object attribute of the test object.
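The query-by-attribute interface for locating a test object among the game's native objects might look like the following sketch; the class and method names here are hypothetical, not the framework's actual API:

```python
class GameObject:
    # Stand-in for a native in-game object with arbitrary attributes.
    def __init__(self, name, **attrs):
        self.name = name
        self.attrs = attrs

class TestAPI:
    # Hypothetical API surface the test script uses via the in-game agent.
    def __init__(self, objects):
        self.objects = objects

    def query(self, **criteria):
        # Identify test objects by matching every specified attribute.
        return [o for o in self.objects
                if all(o.attrs.get(k) == v for k, v in criteria.items())]

scene = [GameObject("door_1", kind="door", locked=True),
         GameObject("key_1", kind="key")]
api = TestAPI(scene)
doors = api.query(kind="door")
```

A test script would then drive the agent toward `doors[0]` and interrogate its behavior (e.g. whether it unlocks after the key is collected).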
Automated classification of defective code from bug tracking tool data
Systems and methods are described for automated classification of defective code from bug tracking tool data. An example method includes receiving a plurality of datasets representing a plurality of bug reports from a bug tracking application. Each dataset may be generated by vectorizing and clustering a source code associated with a respective bug report represented by the dataset. Each dataset may comprise a plurality of classes. At least one class of each dataset may indicate at least one known bug. For each dataset of the plurality of datasets, a respective supervised feature vector may be generated. Each supervised feature vector may be associated with an index of the at least one class with the at least one known bug. Using the supervised feature vectors, a classification model is trained to detect a new bug presence in a new source code.
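A toy stand-in for the trained classification model, assuming bag-of-words vectors and nearest-centroid matching against classes with known bugs (both heavy simplifications of the vectorizing/clustering/training pipeline described above):

```python
from collections import Counter

def vectorize(text, vocab):
    # Bag-of-words vector over a fixed vocabulary (illustrative only).
    counts = Counter(text.lower().split())
    return [counts.get(w, 0) for w in vocab]

# Toy classes with known bugs, built from labelled snippets.
vocab = ["null", "pointer", "index", "range", "leak"]
classes = {
    "null-deref": vectorize("null pointer null", vocab),
    "bounds": vectorize("index out of range index", vocab),
}

def classify(snippet):
    # Nearest-centroid stand-in for the trained classification model.
    vec = vectorize(snippet, vocab)
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(classes, key=lambda c: dist(classes[c], vec))

label = classify("crash on null pointer access")
```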
Systems and methods for remediation of software configurations
Systems and methods for remediation of software configurations are disclosed. The system may store a plurality of configuration policies in a compliance repository. The system may receive trigger data including at least one compliance error and indicating a software instance operating on a cloud service is out of compliance. The system may compare the at least one compliance error with the plurality of configuration policies. When at least one compliance error matches at least one configuration policy, the system may identify a software configuration file and apply the matching configuration policy to the software configuration file to remediate the software instance. When the at least one compliance error does not match at least one configuration policy, the system may generate a new configuration policy, validate the new configuration policy, and apply the new configuration policy to the software configuration file to remediate the software instance.
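The match-then-remediate loop can be sketched as follows, treating the configuration file as a plain dict and matching policies by error code (illustrative assumptions only):

```python
# Compliance repository: error code -> required settings (hypothetical codes).
policies = {
    "TLS_DISABLED": {"tls": True},
    "DEBUG_ENABLED": {"debug": False},
}

def remediate(config, errors):
    # Apply every matching policy to the configuration; unmatched errors are
    # returned so a new policy can be generated and validated, as the
    # abstract describes for the no-match case.
    unmatched = []
    for err in errors:
        if err in policies:
            config.update(policies[err])
        else:
            unmatched.append(err)
    return config, unmatched

cfg, pending = remediate({"tls": False, "debug": True}, ["TLS_DISABLED", "UNKNOWN"])
```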
Method and apparatus for processing test execution logs to determine error locations and error types
A method of processing test execution logs to determine error location and source includes creating a set of training examples based on previously processed test execution logs, clustering the training examples into a set of clusters using an unsupervised learning process, and using the training examples of each cluster to train a respective supervised learning process, where each generated cluster serves as a class label identifying a type of error in the test execution log. The labeled data is then processed by a supervised learning process, specifically a classification algorithm. Once the classification model is built, it is used to predict the type of errors in future, unseen test execution logs. In some embodiments, the unsupervised learning process is a density-based spatial clustering of applications with noise (DBSCAN) application, and the supervised learning processes are random forests or deep neural networks.
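As a lightweight stand-in for the clustering stage, grouping log lines by a normalized signature shows how each cluster can become a class label for the supervised stage (the method uses DBSCAN, not this signature heuristic):

```python
import re

def signature(line):
    # Normalise volatile tokens (numbers) so similar errors group together;
    # a crude stand-in for the unsupervised clustering step.
    return re.sub(r"\d+", "N", line.lower())

def cluster(log_lines):
    groups = {}
    for line in log_lines:
        groups.setdefault(signature(line), []).append(line)
    # Each cluster index becomes a class label for training a classifier.
    return {i: lines for i, lines in enumerate(groups.values())}

labels = cluster([
    "Timeout after 30s in test 12",
    "Timeout after 45s in test 7",
    "AssertionError at line 88",
])
```

The labelled clusters would then feed a supervised classifier that predicts error types in unseen logs.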
Bypassing generation of non-repeatable parameters during software testing
A service testing system is disclosed to enable consistent replay of stateful requests on a service whose output depends on the service's execution state prior to the requests. In embodiments, the service implements a compute engine that executes service requests and a storage subsystem that maintains execution states during the execution of stateful requests. When a stateful request is received during testing, the storage subsystem creates an in-memory test copy of the execution state to support execution of the request, and provides the test copy to the compute engine. In embodiments, the storage subsystem will create a separate instance of execution state for each individual test run. The disclosed techniques enable mock execution states to be easily created for testing of stateful requests, in a manner that is transparent to the compute engine and does not impact production execution data maintained by the service.
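The per-run isolation of execution state can be sketched with a deep copy; the storage and compute interfaces below are hypothetical simplifications of the subsystems described above:

```python
import copy

class StorageSubsystem:
    def __init__(self, production_state):
        self._prod = production_state

    def test_copy(self):
        # Fresh, isolated in-memory copy per test run; the production
        # execution data is never touched by the replayed request.
        return copy.deepcopy(self._prod)

def handle_request(state, request):
    # Toy compute engine: the result depends on execution state prior
    # to the request, i.e. the request is stateful.
    state["counter"] += request["amount"]
    return state["counter"]

storage = StorageSubsystem({"counter": 10})
run1 = handle_request(storage.test_copy(), {"amount": 5})
run2 = handle_request(storage.test_copy(), {"amount": 5})
```

Because each run receives its own copy, the replay is consistent across runs and transparent to the compute engine.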
Automated fault injection testing
An automated fault injection testing and analysis approach drives fault injection into a processor-driven instruction sequence to quantify and define susceptibility to external fault injections for manipulating instruction execution and control flow of a set of computer instructions. A fault injection such as a voltage or electromagnetic pulse directed at predetermined locations on a processor (Central Processing Unit, or CPU) alters the result of a processor instruction to change values or execution paths. One or more quantified injections define an injection chain that causes a predictable or repeatable deviant result from the expected execution path through the code executed by the processor. Based on an accumulation of fault injections and results, a repeatable injection chain and its probability identify an external action that can be taken on a processing device to cause unexpected results differing from the expected execution of a program or set of computer instructions.
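A software-only sketch of the idea, modeling an injected fault as a single bit flip in an instruction's result and showing how one flip diverts a guard branch (real injections are physical voltage/EM glitches, not code):

```python
def faulty_add(a, b, flip_bit=None):
    # Model a glitched ALU: optionally flip one bit of the result,
    # standing in for a voltage or electromagnetic pulse on hardware.
    result = a + b
    if flip_bit is not None:
        result ^= 1 << flip_bit
    return result

def check_branch(a, b, flip_bit=None):
    # A guard such as `if a + b == 8: deny()` can be skipped when a
    # single injected fault changes the compared value.
    return "denied" if faulty_add(a, b, flip_bit) == 8 else "allowed"

baseline = check_branch(3, 5)                 # expected execution path
deviant = check_branch(3, 5, flip_bit=0)      # injected fault flips the outcome
```

An automated campaign would sweep fault locations and timings, accumulating which injections repeatably produce the deviant path.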
DATA AUGMENTATION BASED ON FAILURE CASES
A computer-implemented method is provided for data augmentation. The method includes receiving a set of different base models already pretrained and a set of different test cases. The method further includes collecting a plurality of prediction results of the set of different test cases from the set of different base models. The method also includes identifying a test case as a candidate for the data augmentation based on a number of models in the set of different base models which fail to solve the test case. The method additionally includes augmenting, by a processor device, the identified test case with additional data to form an augmented training dataset. The method further includes retraining at least some of the different base models with the augmented training dataset.
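The candidate-selection step, counting how many base models fail each test case, can be sketched as follows; the failure threshold and data layout are illustrative assumptions:

```python
def failure_candidates(predictions, labels, min_failures=2):
    # predictions: {model_name: [prediction per test case]}
    # A test case becomes an augmentation candidate when at least
    # min_failures pretrained base models fail to solve it.
    candidates = []
    for i, truth in enumerate(labels):
        failures = sum(1 for preds in predictions.values() if preds[i] != truth)
        if failures >= min_failures:
            candidates.append(i)
    return candidates

preds = {"m1": [0, 1, 1], "m2": [0, 0, 1], "m3": [1, 0, 1]}
cands = failure_candidates(preds, labels=[0, 1, 1])
```

The selected cases would then be augmented with additional data and used to retrain some of the base models.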
Homomorphic Encryption-Based Testing Computing System
A homomorphic encryption-based testing computing system provides a risk-based, automated, one-directional push of production data through a homomorphic encryption tool and distributes the encrypted data for use in testing applications. Data elements and test requirements are considered when automatically selecting a homomorphic encryption algorithm. A decisioning component selects an algorithm to use to homomorphically encrypt the data set, and a push mechanism performs one or both of the homomorphic encryption and the distribution of the encrypted data set to at least one intended host. Once delivered, the testing software and/or testing procedures proceed using the encrypted data set, and results of the testing may be stored in a data store. A validation mechanism may validate the test data against production data and communicate whether testing was successful.
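A deliberately insecure toy scheme makes the flow concrete: encrypt production values, run the test computation directly on ciphertexts, then validate against the production result. A real system would select an actual homomorphic scheme such as Paillier or CKKS; the scaling trick below is only additively homomorphic and offers no security whatsoever:

```python
class ToyAdditiveHE:
    # Insecure stand-in: scaling by a secret factor preserves addition
    # (E(a) + E(b) = E(a + b)), which is all this sketch needs.
    def __init__(self, secret=7919):
        self._secret = secret

    def encrypt(self, value):
        return value * self._secret

    def decrypt(self, ciphertext):
        return ciphertext // self._secret

he = ToyAdditiveHE()
# One-directional push: production values leave only in encrypted form.
enc_balances = [he.encrypt(v) for v in [100, 250, 50]]
# The test runs on the encrypted data set without ever decrypting it.
enc_total = sum(enc_balances)
# Validation against the production result.
total = he.decrypt(enc_total)
```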