G06F11/368

DEVICE TESTING ARRANGEMENT
20230050723 · 2023-02-16

An arrangement for automated testing of mobile devices comprising a learning arrangement that learns how to operate devices under test that do not match an earlier, already defined test case pattern. In the arrangement, the learning arrangement generates instructions for performing a set of tasks. The tasks are then executed on the mobile device being tested. The mobile device provides feedback in the form of error/success messages, screenshots, source code, return values, and the like. Based on the feedback and earlier accumulated information, the learning arrangement can generate a new set of instructions in order to execute the set of tasks successfully.
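The feedback loop described above can be sketched minimally as follows; the generator and device-execution callables and the attempt limit are hypothetical stand-ins, not details from the abstract.

```python
# Minimal sketch of the learn-execute-feedback loop: generate
# instructions, run them on the device, and regenerate from the
# accumulated feedback until the task set succeeds.

def run_learning_loop(generate_instructions, execute_on_device, max_attempts=3):
    """Return (successful_instructions, feedback_history), or
    (None, feedback_history) if no attempt succeeds."""
    history = []  # accumulated feedback across attempts
    for _ in range(max_attempts):
        instructions = generate_instructions(history)
        # Feedback may include error/success flags, screenshots, etc.
        feedback = execute_on_device(instructions)
        history.append(feedback)
        if feedback.get("success"):
            return instructions, history
    return None, history
```

In this sketch the "earlier accumulated information" is simply the list of prior feedback passed back into the generator.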

SYSTEMS AND METHODS FOR AUTOMATED TEST DATA MICROSERVICES
20230041477 · 2023-02-09

Systems and methods for automated test data microservices are provided. Test versions of software (such as an Application Programming Interface (API)) may be configured to automatically generate test data and to call a microservice to manage the test data. The microservice may automatically add and remove the test data from an operational data store to facilitate the testing process and to automatically perform setup and teardown stages of the testing process.
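The setup/teardown behavior described above can be illustrated with a small sketch; the class name and the use of a plain dictionary as the operational data store are assumptions for illustration.

```python
# Sketch of a test data microservice that adds generated test records
# during setup and removes exactly those records during teardown,
# leaving pre-existing operational data untouched.

class TestDataMicroservice:
    def __init__(self, data_store):
        self.data_store = data_store  # operational data store (dict here)
        self.managed_keys = []        # records this service inserted

    def setup(self, test_records):
        """Add generated test data to the operational store."""
        for key, value in test_records.items():
            self.data_store[key] = value
            self.managed_keys.append(key)

    def teardown(self):
        """Remove only the records the service added."""
        for key in self.managed_keys:
            self.data_store.pop(key, None)
        self.managed_keys.clear()
```

Tracking the inserted keys is what lets teardown restore the store without disturbing operational records.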

Test package analyzer
11550703 · 2023-01-10

A system and a method for recommending a modification to a test package for software under test. A release note package associated with a feature of the software is received. The release note package is analysed in real time using machine-learning-based models. Further, a keyword is extracted from the release note package using a keyword extraction technique. The keyword corresponds to the feature of the software. The keyword is compared with nomenclatures present in a test package using a pattern matching technique. The test package is associated with the feature of the software. Finally, a modification to the test package is recommended based on the comparison. The modification comprises addition, deletion, or updating of an existing element of the test package. It may be noted that the modification is recommended using an Artificial Intelligence (AI) technique.
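The comparison step can be sketched as a simple pattern match between extracted keywords and test-case names; the matching rule and the "add new test case" recommendation are illustrative assumptions, not details from the abstract.

```python
# Sketch: match release-note keywords against test-package nomenclature
# and recommend an addition when a feature has no covering test.

import re

def recommend_modifications(keywords, test_case_names):
    """Return, per keyword, either the matching test names or a
    recommendation to add a new test case."""
    recommendations = {}
    for kw in keywords:
        pattern = re.compile(re.escape(kw), re.IGNORECASE)
        matched = [name for name in test_case_names if pattern.search(name)]
        # No test mentions the feature keyword: recommend an addition.
        recommendations[kw] = matched if matched else "add new test case"
    return recommendations
```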

DEFECT REPORTING IN APPLICATION TESTING
20180004648 · 2018-01-04

The present subject matter relates to defect reporting in application testing. In an implementation, a category of application testing is determined based on a testing instance of an application. The category of application testing is indicative of an aspect of the application being tested. A list of previously reported defects associated with the determined category of application testing is displayed in a display layer over the testing instance of the application. A first user-input, indicative of acceptance or rejection of a previously reported defect from the list with respect to the testing instance of the application, is received. The first user-input is aggregated with previous user-inputs indicative of acceptance or rejection of the previously reported defect. Based on the aggregation, it is determined whether the previously reported defect is irrelevant to the testing instance of the application.
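The aggregation step can be sketched as a vote tally; the majority-rejection threshold is an assumed decision rule for illustration, as the abstract does not specify one.

```python
# Sketch: aggregate accept/reject user inputs for a previously reported
# defect and flag it irrelevant when rejections dominate.

def is_defect_irrelevant(votes, rejection_threshold=0.5):
    """votes: list of booleans, True = the defect was rejected for this
    testing instance. Returns True when the rejection fraction exceeds
    the threshold."""
    if not votes:
        return False  # no inputs yet, keep the defect listed
    rejections = sum(1 for v in votes if v)
    return rejections / len(votes) > rejection_threshold
```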

Cloud Assisted Behavioral Automated Testing
20180007175 · 2018-01-04

A computer-readable storage medium, system, and method for improving automated testing systems using first and second behavioral data. The first behavioral data is collected periodically and the second behavioral data is collected in real time. The receipt of the first and second behavioral data is followed by the receipt of a system configuration template. A test case is updated based on the first and second behavioral data, and an automated test environment is reconfigured based on the first behavioral data, the second behavioral data, and the system configuration template. The test executes in the automated test environment, producing a test result.
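The reconfiguration step can be sketched as a layered merge; treating the template and both behavioral data sets as plain dictionaries, with real-time data taking precedence, is an assumption for illustration.

```python
# Sketch: build the test-environment configuration from the system
# configuration template, overlaying periodic behavioral data and then
# real-time behavioral data.

def reconfigure_environment(periodic_data, realtime_data, template):
    """Later layers override earlier ones: template < periodic < real-time."""
    config = dict(template)
    config.update(periodic_data)
    config.update(realtime_data)
    return config
```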

Systems and method for testing computing environments

Systems and methods are disclosed herein for improving data migration operations, including testing and setup of computing environments. In one example, the method may include receiving data for one or more application programming interfaces (APIs). The method may further include generating one or more tests to test the one or more APIs in a first computing environment, testing the APIs, storing the results in a database, and performing a change data capture (CDC) operation. The method may further include augmenting the one or more tests with the CDC data to generate an updated test. The method may further include testing, using the updated test, a second set of the one or more APIs and comparing the test results. The method may also include outputting a confidence score indicating a correlation between the first environment and the second environment.
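The final comparison step can be sketched by scoring agreement between per-API results from the two environments; the scoring rule (fraction of matching outcomes) is an assumption, since the abstract does not define how the correlation is computed.

```python
# Sketch: compare per-API test outcomes across two environments and
# emit a confidence score in [0, 1].

def confidence_score(results_env1, results_env2):
    """results_env*: dict mapping API name -> test outcome."""
    shared = set(results_env1) & set(results_env2)
    if not shared:
        return 0.0
    matches = sum(1 for api in shared if results_env1[api] == results_env2[api])
    return matches / len(shared)
```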

OBJECTIVE-DRIVEN TEST CASES AND SUITES
20230236956 · 2023-07-27

An objective-driven test case generation system includes an atomic test case module, a test data module, a tailoring module and a functional test case module. The atomic test case module generates a plurality of atomic test cases and stores the atomic test cases in an atomic test case library. The test data module receives a business model, determines one or more test steps from the input business model, and generates test data including the test steps. The tailoring module performs a linking operation to link the test steps included in the test data with one or more atomic test cases included in the atomic test case library to generate linked test case data. The functional test case module generates an objective-driven functional test case based on the linked test case data.
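The tailoring module's linking operation can be sketched as a lookup from test steps into the atomic test case library; the key-by-action matching rule is an illustrative assumption, as the abstract does not specify how linking is performed.

```python
# Sketch: link business-model test steps to atomic test cases from the
# library to produce linked test case data.

def link_test_steps(test_steps, atomic_library):
    """atomic_library: dict mapping action name -> atomic test case.
    Returns one linked record per step; unmatched steps link to None."""
    linked = []
    for step in test_steps:
        atomic = atomic_library.get(step["action"])
        linked.append({"step": step, "atomic_case": atomic})
    return linked
```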

Bypassing generation of non-repeatable parameters during software testing

A service testing system is disclosed to enable consistent replay of stateful requests on a service whose output depends on the service's execution state prior to the requests. In embodiments, the service implements a compute engine that executes service requests and a storage subsystem that maintains execution states during the execution of stateful requests. When a stateful request is received during testing, the storage subsystem creates an in-memory test copy of the execution state to support execution of the request, and provides the test copy to the compute engine. In embodiments, the storage subsystem will create a separate instance of execution state for each individual test run. The disclosed techniques enable mock execution states to be easily created for testing of stateful requests, in a manner that is transparent to the compute engine and does not impact production execution data maintained by the service.
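The per-run state isolation described above can be sketched with a deep copy; the class and method names are hypothetical, and a real storage subsystem would of course be more elaborate than an in-process dictionary.

```python
# Sketch: the storage subsystem hands each test run its own in-memory
# copy of the execution state, so mutations during replay never touch
# the production state.

import copy

class StateStore:
    def __init__(self, production_state):
        self._production = production_state

    def test_copy(self):
        """Return a fresh, independent copy for one test run."""
        return copy.deepcopy(self._production)
```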

COMPARING THE PERFORMANCE OF MULTIPLE APPLICATION VERSIONS
20230023876 · 2023-01-26

Comparing the performance of multiple versions or branches/paths of an application (e.g., a web service or application) may be conducted within a suitable computing environment. Such an environment may be virtual in nature, cloud-based, or server-based, and is hosted with tools for simultaneously (or nearly simultaneously) executing multiple containers or other code collections under the same or similar operating conditions (e.g., network congestion, resource contention, memory management schemes). By arranging the performance tests of different application versions in different sequences executed in parallel in separate containers, fair comparisons of the tested applications are obtained. Testing sequences may be executed multiple times, and metrics are collected during each execution. Afterward, the results for each metric for each code version are aggregated and displayed to indicate their relative performance quantitatively and/or qualitatively.
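The aggregation step can be sketched by grouping the collected samples per version and metric; using the mean as the aggregate is an assumed choice for illustration.

```python
# Sketch: aggregate metrics collected across repeated executions,
# grouped by (code version, metric name).

from statistics import mean

def aggregate_metrics(runs):
    """runs: list of (version, metric_name, value) samples.
    Returns {(version, metric_name): mean value}."""
    grouped = {}
    for version, metric, value in runs:
        grouped.setdefault((version, metric), []).append(value)
    return {key: mean(values) for key, values in grouped.items()}
```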

RANKING TESTS BASED ON CODE CHANGE AND COVERAGE
20230025441 · 2023-01-26

A system can identify a file comprising computer-executable instructions, wherein the file has been modified since it was last transformed into a computer-executable program on which a group of tests was performed. The system can, for respective tests, determine respective line coverage ratios, respective function coverage ratios, and respective branch coverage ratios. The system can select an updated group of tests from the group of tests based on the respective line coverage ratios, function coverage ratios, and branch coverage ratios, the updated group of tests comprising a subgroup of the group of tests. The system can create an updated computer-executable program from the file. The system can test the updated computer-executable program with the updated group of tests.
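The selection step can be sketched by combining the three coverage ratios per test; the combining rule (a simple average) and the threshold are assumptions for illustration, as the abstract leaves the selection criterion open.

```python
# Sketch: rank tests by their line, function, and branch coverage
# ratios for the modified file and keep those above a threshold.

def select_tests(coverage, threshold=0.2):
    """coverage: dict test_name -> (line_ratio, func_ratio, branch_ratio).
    Returns the sorted names of the selected subgroup."""
    selected = []
    for name, (line_r, func_r, branch_r) in coverage.items():
        score = (line_r + func_r + branch_r) / 3
        if score >= threshold:
            selected.append(name)
    return sorted(selected)
```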