G06F11/3676

AUTOMATIC TEST GENERATION FOR HIGHLY COMPLEX EXISTING SOFTWARE
20230087569 · 2023-03-23

Techniques are disclosed for automatically generating software tests for complex software systems, such as operating systems (OS) and/or systems that may be implemented as part of an autonomous vehicle (AV) or advanced driving assistance system (ADAS). The technique generates tests using a tool, such as a stressor, which stresses a particular system under test in multiple ways. For every run of the stressor, the functions of the system that are invoked during the test are captured. A check is then performed to determine whether this set of functions corresponds to one of the test scenarios for which testing is desired. If the set of invoked functions matches the set of functions that defines the test, the stressor configuration is stored and is considered the test for that scenario.
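
A rough sketch of the search loop this abstract describes: run a stressor under random configurations, trace which system functions each run invokes, and keep the first configuration whose invoked set matches the set of functions that defines the desired scenario. All knob and function names below are invented for illustration.

```python
import random

SCENARIO = {"alloc", "free", "flush"}  # functions defining the desired test

def invoked_functions(config):
    """Simulated trace of which SUT functions a stressor config exercises."""
    table = {
        "io": {"read", "write", "flush"},
        "mem": {"alloc", "free"},
        "cache": {"alloc", "flush"},
    }
    invoked = set()
    for knob in config:
        invoked |= table[knob]
    return invoked

def search_for_test(scenario, trials=100, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        knobs = rng.sample(["io", "mem", "cache"], k=rng.randint(1, 3))
        config = tuple(sorted(knobs))
        if invoked_functions(config) == scenario:
            return config  # store this stressor configuration as the test
    return None

print(search_for_test(SCENARIO))
```

The stored configuration, not the trace, becomes the reusable test artifact: replaying it is expected to exercise the same scenario again.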

System testing infrastructure for analyzing and preventing soft failure in active environment

A method for testing a system under test (SUT) in an active environment includes receiving, by a testing system, a code path of the SUT that causes a soft failure in the active environment. The soft failure occurs in the active environment during execution of the SUT based at least on a parameter of the active environment. The method further includes generating, by the testing system, multiple tests for testing the SUT, the tests generated based on a coverage model of the SUT, wherein the coverage model uses several attributes. The method further includes selecting, by the testing system, from the generated tests, a set of tests that are associated with the code path. The method further includes executing, by the testing system, only the set of tests that are selected on the SUT to analyze a cause of the soft failure.
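
The selection step above can be sketched as follows: generate tests as combinations of coverage-model attributes, then run only those associated with the failing code path. The attribute names and the mapping from a test to a code path are hypothetical.

```python
from itertools import product

coverage_model = {
    "workload": ["light", "heavy"],
    "memory": ["low", "high"],
}

def code_path(test):
    # Assumed association between a test's attributes and the path it exercises.
    return "alloc_retry" if test["memory"] == "low" else "fast_path"

def generate_tests(model):
    """Cross-product of coverage-model attributes, one test per combination."""
    keys = sorted(model)
    return [dict(zip(keys, values)) for values in product(*(model[k] for k in keys))]

failing_path = "alloc_retry"  # received code path causing the soft failure
tests = generate_tests(coverage_model)
selected = [t for t in tests if code_path(t) == failing_path]
for t in selected:
    print("executing", t)  # only the selected tests run on the SUT
```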

SYSTEMS AND METHOD FOR ANALYZING SOFTWARE AND TESTING INTEGRATION
20220342663 · 2022-10-27

An assessment system can generate a software quality value based on testing results and analysis of a multitude of factors that affect a readiness evaluation. For example, the system generates a software quality score (e.g., an Applause Quality Score, "AQS") that enables development teams to understand the level of quality they are achieving for a given release and build-over-build. In various examples, the system generates a data-driven score that enables development or quality assurance teams to decide when a build is ready for release. In further embodiments, the system can integrate user interfaces that present the software quality score in a user dashboard linked to version control systems. On review and acceptance of the score, a user can trigger the release of their new code or product.
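
The abstract does not disclose the actual scoring formula, so the sketch below only illustrates the general shape of such a score: hypothetical quality factors are combined into a weighted 0-100 value, and a threshold gates the release decision.

```python
def quality_score(factors, weights):
    """Weighted average of 0..1 quality factors, scaled to 0..100."""
    total = sum(weights.values())
    return 100 * sum(weights[k] * factors[k] for k in weights) / total

# Illustrative factors a team might feed in (all names are assumptions).
factors = {"pass_rate": 0.97, "coverage": 0.80, "defect_free": 0.90}
weights = {"pass_rate": 0.5, "coverage": 0.3, "defect_free": 0.2}

score = quality_score(factors, weights)
ready = score >= 85  # a release gate a team might configure
print(round(score, 1), ready)
```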

Automatic evaluation of test code quality
11481311 · 2022-10-25

Techniques and solutions are described for automatically evaluating test code. In one technique, test code quality is evaluated by comparing assertions in test code with output values in the target code tested by the test code. Output values that are not associated with assertions, or an insufficient number or variety of assertions, can indicate that a test can be improved. In another technique, test quality is assessed by dynamically changing the target code or the test data used with a test. Room for test improvement can be indicated if the test code produces a passing result despite changes to the test data used with the test or to the target code executed in conducting the test.

Source code test consolidation
11474932 · 2022-10-18

A method includes identifying a set of tests for a source code, analyzing the set of tests to identify overlapping blocks of the source code that are to be tested by each of the set of tests, merging a subset of the tests that include the overlapping blocks of the source code to create a merged test, and causing the merged test to be executed to test the source code. In an implementation, code coverage results are used when analyzing the set of tests to identify overlapping blocks of the source code.
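
The merging step can be sketched with coverage data represented as a set of covered block IDs per test; tests whose block sets overlap are grouped into one merged test. Test names and block IDs are hypothetical.

```python
coverage = {
    "test_parse": {1, 2, 3},
    "test_parse_empty": {1, 2},
    "test_render": {7, 8},
}

def merge_overlapping(coverage):
    """Group tests that cover overlapping blocks of source code."""
    merged = []
    for name, blocks in sorted(coverage.items()):
        for group in merged:
            if group["blocks"] & blocks:  # shared blocks => merge candidates
                group["tests"].append(name)
                group["blocks"] |= blocks
                break
        else:
            merged.append({"tests": [name], "blocks": set(blocks)})
    return merged

groups = merge_overlapping(coverage)
for g in groups:
    print(g["tests"], sorted(g["blocks"]))
```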

SYSTEMS AND METHODS FOR CAPTURING TEST EXECUTION AND COMMUNICATION
20230130027 · 2023-04-27

A test automation system is provided that implements a DataTap service to capture underlying communications in a test environment or on a test platform. In some embodiments, the DataTap service can be configured as a passive data capture element. In various embodiments, passive data capture allows the system to record/mirror all data traffic flows associated with test execution without affecting the operation or execution of the tests. Some alternatives include functions to modify test traffic. In some examples, the system is configured to replay tests, modify test execution, modify test parameters, among other options. Various embodiments of the system allow new and updated feature sets to be integrated into existing test platforms without any changes in code, tests, or operation. Further, updates to test services become simple plug-in based features that, for example, provide assurance of zero impact on existing implementation.

Test cycle optimization using contextual association mapping

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for test cycle optimization using contextual association mapping. In one aspect, a method includes obtaining an artifact that includes a collection of reference items, where each reference item includes a sequence of words; generating candidate tags from each of the reference items based on the sequences of words in the reference items; selecting a subset of the candidate tags as context tags based on how often the candidate tags appear in the reference items; obtaining a sample item that includes a sequence of words; identifying a subset of the context tags in the sequence of words in the sample item; and classifying a subset of the reference items as contextually similar to the sample item based on the context tags that were identified.
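
The tagging flow above can be sketched as: words drawn from reference items are candidate tags, frequent candidates become context tags, and reference items sharing a context tag with the sample are classified as contextually similar. The item texts and the frequency threshold are invented for illustration.

```python
from collections import Counter

reference_items = [
    "login page rejects valid password",
    "login button missing on mobile",
    "report export times out",
]
sample_item = "password reset fails on login screen"

# Candidate tags: every word in every reference item, counted across items.
counts = Counter(word for item in reference_items for word in item.split())
context_tags = {w for w, c in counts.items() if c >= 2}  # frequency threshold

# Context tags present in the sample item select contextually similar items.
sample_tags = context_tags & set(sample_item.split())
similar = [item for item in reference_items if sample_tags & set(item.split())]
print(context_tags, similar)
```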

Stress test impact isolation and mapping

A method for testing a system under test (SUT) in an active environment to identify the cause of a soft failure includes recording a first difference vector by executing a set of test cases on a baseline system and monitoring performance parameters of the baseline system before and after executing the test cases. Each performance record represents the differences in the performance parameters of the baseline system before and after the execution of a corresponding test case. The method further includes similarly recording a second difference vector by executing the test cases on the SUT and monitoring performance parameters of the SUT before and after executing the test cases. The method further includes identifying an outlier performance record in the second difference vector by comparing the two difference vectors, and determining a root cause of the soft failure by analyzing the test case corresponding to the outlier.
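
A toy sketch of the comparison: each vector entry is the change in one performance parameter across one test case, and the SUT entry that deviates most from the baseline (here, by a simple median-based rule; the actual outlier criterion is not specified in the abstract) points at the test case to analyze.

```python
from statistics import median

test_cases = ["tc1", "tc2", "tc3", "tc4"]
baseline_diffs = [1.0, 1.1, 0.9, 1.0]  # baseline system, per test case
sut_diffs = [1.0, 1.2, 5.0, 1.1]       # system under test, per test case

# Per-test-case deviation of the SUT's difference vector from the baseline's.
deltas = [abs(s - b) for s, b in zip(sut_diffs, baseline_diffs)]
typical = median(deltas)
worst = max(range(len(deltas)), key=lambda i: deltas[i])

# Flag an outlier only if it stands well clear of the typical deviation.
suspect = test_cases[worst] if deltas[worst] > 3 * max(typical, 1e-9) else None
print("root-cause candidate:", suspect)
```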

Framework for UI automation based on graph recognition technology and related methods
11599449 · 2023-03-07

A GUI testing device may be configured to execute a testing state machine for interacting with a software application to generate an initial screen of a GUI. The GUI testing device may be configured to determine a current state in the testing state machine based upon matching a trigger target in the initial screen to a given state. The current state may include an operation, and the operation may be associated with a trigger target to operate on. A trigger may include a source state, a destination state, and a trigger target. The operation may include a user input operation and an operation trigger target. The GUI testing device may be configured to perform the operation on the matching trigger target in the initial screen to generate a next screen of the GUI, and to advance from the current state to a next state based upon the trigger.
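
The trigger-driven state machine can be sketched as follows: each trigger names a source state, a destination state, and a target to match on the current screen; performing the operation on the matched target yields the next screen. The screens, targets, and the `perform` mapping are all hypothetical stand-ins for real GUI interaction.

```python
triggers = [
    {"source": "start", "dest": "logged_in", "target": "login_button"},
    {"source": "logged_in", "dest": "done", "target": "logout_button"},
]

def perform(target):
    """Assumed screen model: acting on a target yields the next screen's elements."""
    return {"login_button": {"logout_button"}, "logout_button": set()}[target]

def run(initial_screen):
    state, screen, path = "start", initial_screen, ["start"]
    while True:
        # Match a trigger target in the current screen to the current state.
        match = next((t for t in triggers
                      if t["source"] == state and t["target"] in screen), None)
        if match is None:
            return path
        screen = perform(match["target"])  # user-input operation on the target
        state = match["dest"]              # advance along the trigger
        path.append(state)

print(run({"login_button", "signup_link"}))
```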

Information processing system, information processing method, and non-transitory recording medium
11599401 · 2023-03-07

An information processing system, an information processing method, and a non-transitory recording medium. The information processing system receives, from a device, the number of write operations performed on one or more memories included in the device, together with counter information of the device; determines whether there is a malfunction, or a probability of a malfunction, based on the number of write operations received from the device; and, in response to that determination, identifies the software that causes, or is likely to cause, the malfunction based on the counter information.
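
A minimal sketch of the check described above, under the assumption that the counter information is a per-software write counter and that wear is judged against a fixed endurance threshold (both assumptions; the abstract does not specify either).

```python
WEAR_LIMIT = 100_000  # assumed endurance threshold for the memory

def assess(write_count, counters):
    """Return the software likely responsible for a (probable) malfunction."""
    if write_count < 0.9 * WEAR_LIMIT:
        return None  # neither a malfunction nor a probable one
    # Attribute the (probable) malfunction to the heaviest writer.
    return max(counters, key=counters.get)

# Hypothetical per-software write counters reported by the device.
counters = {"logger": 64_000, "updater": 21_000, "ui": 8_000}
print(assess(sum(counters.values()), counters))
```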