Patent classifications
G06F11/368
Test relevancy prediction for code changes
In some examples, test relevancy prediction for code changes may include ascertaining files for a commit for a build, and for each test of a plurality of tests, determining a score based on a weight assigned to a file of the ascertained files. Test relevancy prediction for code changes may further include ordering each test of the plurality of tests according to the determined score, and identifying, based on the ordering of each test of the plurality of tests, tests from the plurality of tests for which the score exceeds a specified threshold. The identified tests may represent tests that are to be applied to the build.
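The scoring, ordering, and thresholding steps above can be sketched as follows; the weight table, test names, and file paths are illustrative assumptions, not details from the abstract.

```python
def select_relevant_tests(commit_files, test_file_weights, threshold):
    """Score each test by the weights of the committed files it covers,
    order the tests by score, and keep those above the threshold."""
    scores = {
        test: sum(weights.get(f, 0.0) for f in commit_files)
        for test, weights in test_file_weights.items()
    }
    ordered = sorted(scores, key=scores.get, reverse=True)
    return [t for t in ordered if scores[t] > threshold]

selected = select_relevant_tests(
    commit_files=["src/parser.py", "src/lexer.py"],
    test_file_weights={
        "test_parser": {"src/parser.py": 0.9},
        "test_ui":     {"ui/app.py": 0.8},
        "test_lexer":  {"src/lexer.py": 0.6, "src/parser.py": 0.2},
    },
    threshold=0.5,
)
# → ["test_parser", "test_lexer"]  (scores 0.9 and 0.8; test_ui scores 0.0)
```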
User mode event handling
A method includes asserting a field of an event flag mask register configured to inhibit an event handler. The method also includes, responsive to an event that corresponds to the field of the event flag mask register being triggered: asserting a field of an event flag register associated with the event; and based on the field in the event flag register being asserted, taking an action by a task being executed by a data processor core.
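A minimal model of this mask-and-flag mechanism, using plain integers as registers; the register layout, event names, and return strings are illustrative assumptions.

```python
EVENT_TIMER = 1 << 0
EVENT_DMA = 1 << 1

class EventRegisters:
    """Toy model: a mask register that inhibits the handler, and a flag
    register that records inhibited events for the running task."""
    def __init__(self):
        self.mask = 0   # asserted bits inhibit the event handler
        self.flags = 0  # asserted when a masked event triggers

    def inhibit_handler(self, event_bit):
        self.mask |= event_bit

    def trigger(self, event_bit):
        if self.mask & event_bit:
            # handler inhibited: latch the flag for the task to observe
            self.flags |= event_bit
            return "deferred-to-task"
        return "handler-invoked"

    def task_poll(self, event_bit):
        # the executing task sees the asserted flag and takes its action
        if self.flags & event_bit:
            self.flags &= ~event_bit  # acknowledge the event
            return True
        return False
```

With the timer event masked, triggering it latches the flag instead of invoking the handler, and the task later consumes the flag via `task_poll`.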
Using graphical image analysis for identifying image objects
An image of a graphical user interface is captured. For example, a screen shot of a browser display is captured. Text syntax is executed that contains one or more parameters for identifying a graphical object. For example, the text syntax may identify a rectangle that contains the text “OK” where the text is red. Based on the text syntax, a graphical object is identified in the image of the graphical user interface. Information is returned that identifies how to access the graphical object in the graphical user interface. For example, coordinates of the graphical object are identified. This information can then be used in a test script using existing programming languages to test the graphical user interface. For example, the coordinates may be used to click on the OK button.
TESTING SYSTEMS AND METHODS
A computer-implemented method, system, and computing device for identifying a test option associated with an application for a user are described. The method comprises selecting a predefined test indicated by a test identifier associated with the requested application, the test having more than one test option associated therewith, generating a hash of the test identifier and a user identifier associated with the user, processing the hash to generate an index, comparing said index with a distribution of numbers divided into multiple ranges, each range being associated with a test option, and selecting a test option associated with the range into which the index falls. The applications may be computer gaming applications.
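The hash-to-range selection can be sketched as below; SHA-256, the 0–99 index space, and the two-way split are assumptions, since the abstract fixes neither the hash nor the distribution.

```python
import hashlib

def select_test_option(test_id, user_id, ranges):
    """Hash the test and user identifiers, reduce the hash to an index,
    and return the option owning the range the index falls into.

    ranges: list of ((low, high), option) pairs covering 0-99 inclusive.
    """
    digest = hashlib.sha256(f"{test_id}:{user_id}".encode()).hexdigest()
    index = int(digest, 16) % 100
    for (low, high), option in ranges:
        if low <= index <= high:
            return option
    raise ValueError(f"ranges do not cover index {index}")

ranges = [((0, 49), "variant-a"), ((50, 99), "variant-b")]
option = select_test_option("speed-test", "user-123", ranges)
```

Because the hash is deterministic, the same user always receives the same option for a given test, without storing any per-user assignment.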
Techniques for determining client-side effects of server-side behavior using canary analysis
In one embodiment of the present invention, a sticky canary router routes each request associated with a service to either a canary cluster of servers that implement a modification to the service or a baseline cluster of servers that do not implement the modification. The sticky canary router implements a mapping algorithm that determines the routing of each request based on a current time, a time window for the routing, and a characteristic of the request. Notably, the mapping algorithm may be implemented such that, for time segments with duration equal to the time window, the sticky canary router routes all requests received from a particular device in a consistent fashion—either to the canary cluster or to a baseline cluster. Configured in this manner, the sticky canary router enables the analysis of approximately full sections of client interactions with the canary servers, thereby facilitating identification of client-side effects of the changes.
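One way to realize such a sticky mapping is to hash the device characteristic together with the current time segment, so every request from a device lands on the same cluster for the whole window; the hash choice, 120-second window, and 5% canary share below are illustrative assumptions.

```python
import hashlib

def route(device_id, now_secs, window_secs=120, canary_pct=5):
    """Map a request to 'canary' or 'baseline' consistently for all
    requests from the same device within one time window."""
    segment = int(now_secs // window_secs)  # same value for the whole window
    digest = hashlib.sha256(f"{device_id}:{segment}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "canary" if bucket < canary_pct else "baseline"
```

Within a window the routing is constant per device, so a full section of that device's interactions can be compared against baseline traffic; across windows the assignment re-randomizes.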
Test execution optimizer for test automation
Systems and methods are provided for determining tests that may be executed in parallel during regression testing of an analytics application. Multiple tests that test functions of the analytics application are accessed from a test automation suite. For each test, data sources that provide data to the analytics application during the test are identified. The tests are aggregated into temporary groups according to the identified data sources. Test groups are generated from the temporary groups such that each test group comprises tests that are associated with non-overlapping data sources. The regression testing is performed on the application by executing the test groups in parallel.
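A greedy sketch of the grouping step: each test joins the first group whose accumulated data sources do not overlap its own, so tests sharing no data sources end up in the same group. The greedy strategy and the example sources are assumptions; the abstract only requires non-overlapping sources within each group.

```python
def group_tests(test_sources):
    """test_sources: {test_name: set_of_data_sources}.
    Returns lists of test names whose data sources do not overlap."""
    groups = []  # each entry: (accumulated_sources, member_tests)
    for test, sources in test_sources.items():
        for used, members in groups:
            if not (used & sources):  # no shared data source with this group
                used |= sources       # in-place union keeps the group's set current
                members.append(test)
                break
        else:
            groups.append((set(sources), [test]))
    return [members for _, members in groups]

grouped = group_tests({
    "t1": {"db_a"},
    "t2": {"db_b"},
    "t3": {"db_a", "db_c"},
})
# → [["t1", "t2"], ["t3"]]  (t3 shares db_a with t1, so it starts a new group)
```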
User interface that integrates plural client portals in plural user interface portions through sharing of one or more log records
A computer-implemented method for integrating client portals of underlying data processing applications through a shared log record, including: storing one or more log records that are each shared by the process management application and the version control application; receiving instructions through a user interface that integrates, through the shared one or more log records, the process management client portal with the version control client portal; in response to the receiving of the instructions, executing the received instructions, the executing of the received instructions including: selecting, by the version control application, a particular version of the rule from the multiple versions of the rule stored in the system storage; and transitioning, by the process management application, the particular version of the rule from the first state of the plurality of states to the second, different state of the plurality of states.
ARCHITECTURE, METHOD AND SYSTEM FOR LIVE TESTING IN A PRODUCTION ENVIRONMENT
An architecture, methods, and a system for live testing in a production environment are provided. The architecture comprises a platform-independent Test Planner for generating a test package in response to receiving an event. Generating a test package comprises selecting test goals, generating a test suite, and generating a test plan. The architecture also comprises a platform-dependent Test Execution Framework (TEF) for executing the test package in an environment serving live traffic. Executing the test package comprises initializing the test plan, starting the test plan, and then either reporting the successful completion of the test plan; reporting the suspension of the test plan and waiting for further instructions; or reporting a failure of the test plan and executing a corresponding contingency plan.
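The three execution outcomes can be sketched as a small dispatcher; the outcome strings, the `run_plan` callable, and the contingency hook are illustrative stand-ins for the TEF's actual interfaces.

```python
def execute_test_plan(run_plan, contingency):
    """run_plan() is assumed to return 'success', 'suspended', or
    'failure'; on failure the corresponding contingency plan runs."""
    outcome = run_plan()
    if outcome == "success":
        return "reported: completed"
    if outcome == "suspended":
        return "reported: suspended, awaiting further instructions"
    contingency()  # failure path: execute the contingency plan
    return "reported: failure, contingency executed"
```

For example, a plan stub returning `"failure"` would trigger the contingency callable before the failure is reported.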
SYSTEM AND COMPUTER-IMPLEMENTED METHOD FOR TESTING AN APPLICATION USING AN AUTOMATION BOT
A system and a method for performing a test of an application using an automation bot are provided. The method comprises accessing the application to be tested. The method comprises executing the test of the application using the automation bot. The automation bot is configured to interact with one or more other applications. The one or more other applications are different from the application. The method comprises determining one or more test results of the application based on the execution of the test. Further, the method comprises generating a notification indicative of the determined one or more test results.
ARTIFICIAL INTELLIGENCE-BASED AUTONOMOUS CONTINUOUS TESTING PLATFORM
Obtaining a configuration hook. Obtaining a base configuration of a remote system using the configuration hook. Obtaining a system-specific model associated with the configuration hook and the remote system. Obtaining one or more pre-built test accelerators. Obtaining a deep machine learning model. Generating, based on the base configuration, the system-specific model, the one or more pre-built test accelerators and the deep machine learning model, a custom configuration model. Generating a plurality of user journeys to be autonomously tested. Generating, based on the custom configuration model and the plurality of user journeys to be autonomously tested, a plurality of autonomous test scripts. Autonomously pre-configuring at run-time the plurality of autonomous test scripts. Autonomously executing the plurality of autonomous test scripts. Generating, based on the autonomously executed plurality of autonomous test scripts, one or more autonomous test reports. Presenting the one or more autonomous test reports.