Patent classifications
G06F11/3692
Techniques for large-scale functional testing in cloud-computing environments
Techniques are disclosed for generating an execution plan for performing functional tests in a cloud-computing environment. Infrastructure resources and capabilities (e.g., system requirements) may be defined within an infrastructure object (e.g., a resource of a declarative infrastructure provisioner) that stores a code segment that implements the resource or capability. Metadata may be maintained that indicates what particular capabilities are applicable to each infrastructure resource. Using the metadata, the system can generate an execution plan by combining code segments for each resource with code segments defining each capability in accordance with the metadata. The execution plan may include programmatic instructions that, when executed, generate a set of test results. The system can execute instructions that cause the set of test results to be presented at a user device.
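As an illustration only (the patent publishes no code), a minimal sketch of the metadata-driven plan assembly might look as follows; the resource names, capability names, and the `CODE_SEGMENTS` mapping are hypothetical:

```python
# Hypothetical sketch: assemble an execution plan by pairing each
# infrastructure resource's code segment with the code segments of the
# capabilities that the metadata marks as applicable to it.

CODE_SEGMENTS = {                        # resource/capability -> code segment
    "vm_cluster": "provision_vm_cluster()",
    "block_storage": "attach_block_storage()",
    "failover": "assert_failover_recovers()",
    "throughput": "assert_min_throughput()",
}

CAPABILITY_METADATA = {                  # resource -> applicable capabilities
    "vm_cluster": ["failover", "throughput"],
    "block_storage": ["throughput"],
}

def build_execution_plan(metadata, segments):
    """Combine each resource's segment with the capability segments that apply."""
    plan = []
    for resource, capabilities in metadata.items():
        plan.append(segments[resource])
        plan.extend(segments[cap] for cap in capabilities)
    return plan

if __name__ == "__main__":
    for step in build_execution_plan(CAPABILITY_METADATA, CODE_SEGMENTS):
        print(step)   # executing these steps would yield the set of test results
```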
System and method for detecting errors in a task workflow from a video stream
A system for detecting errors in task workflows records a real-time video feed that shows a plurality of steps being performed to accomplish a plurality of tasks through an automation process system. The system splits the video feed, at valid breakpoints determined by a cognitive machine-learning engine, into a plurality of video recordings, each of which shows a single task. For each task from among the plurality of tasks, the system determines whether the task fails and the exact point of failure for that task. If the system determines that the task fails, the system determines the particular step where the task fails, flags that step as a failed step, and reports the flagged step for troubleshooting.
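A minimal sketch of the splitting-and-flagging flow, assuming the breakpoints and the per-step success signals have already been produced by the machine-learning analysis (both are stand-ins here):

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    succeeded: bool        # assumed to come from per-frame analysis of the feed

def split_feed(frames, breakpoints):
    """Cut the recorded feed into one recording per task at the breakpoints."""
    bounds = [0, *breakpoints, len(frames)]
    return [frames[a:b] for a, b in zip(bounds, bounds[1:])]

def flag_failures(task_recordings, extract_steps):
    """Return (task index, step name) for the first failing step of each task."""
    flagged = []
    for task_id, recording in enumerate(task_recordings):
        for step in extract_steps(recording):   # extract_steps: ML stand-in
            if not step.succeeded:              # exact point of failure
                flagged.append((task_id, step.name))
                break                           # reported for troubleshooting
    return flagged
```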
Method and device for testing a technical system
A method for testing a technical system. Tests are carried out with the aid of a simulation of the system. The tests are evaluated with respect to a fulfillment measure of a quantitative requirement on the system and with respect to different error measures of the simulation. On the basis of the fulfillment measure and each of the error measures, each test is classified case by case as either reliable or unreliable. A selection among the error measures is made on the basis of the number of tests classified as reliable.
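One way to read the classification rule is that a test is reliable when the simulation error cannot flip its pass/fail verdict; a sketch under that assumption (the rule itself is not spelled out in the abstract):

```python
# Hypothetical sketch: classify each simulated test as reliable when the
# simulation's error is smaller than the margin by which the quantitative
# requirement is met (or missed), then pick the error measure that
# classifies the most tests as reliable.

def classify(fulfillment, error):
    # Reliable if the simulation error cannot flip the pass/fail verdict.
    return abs(fulfillment) > error

def select_error_measure(fulfillments, error_measures):
    """error_measures: {name: [error per test]} -> name with most reliable tests."""
    counts = {
        name: sum(classify(f, e) for f, e in zip(fulfillments, errors))
        for name, errors in error_measures.items()
    }
    return max(counts, key=counts.get)
```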
Auto-intrusive data pattern and test case generation for system validation
Techniques for auto-intrusive data pattern and test case generation for negative service testing are described. A test engine obtains negative test information specifying negative test input examples or schemas associated with tests that are expected to fail. A test generator generates multiple test cases based on the negative test information. A test execution orchestrator splits each test case into actions that are inserted into queues, from which workflow execution agents perform the tests by reading the actions and interacting with services. The tests may also include adjusting a rate of transactions allowed between top-level services and/or downstream services. Results from the testing are analyzed by a test analysis engine and used to inform the services or the test originator of test cases where the expected failures did not arise.
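A rough sketch of the queue-based orchestration, with a stubbed service call standing in for the real top-level and downstream services; all names are illustrative:

```python
import queue, threading

def call_service(payload):
    """Stub for a request to the service under test; True means it succeeded."""
    return payload is not None

def generate_test_cases(negative_inputs):
    # Each case derives from a negative example/schema and is expected to fail.
    return [{"payload": p, "expect_failure": True} for p in negative_inputs]

def agent(actions, results):
    while True:
        case = actions.get()
        if case is None:                         # sentinel: no more actions
            return
        if call_service(case["payload"]) and case["expect_failure"]:
            results.append(("expected failure did not arise", case["payload"]))

actions, results = queue.Queue(), []
for case in generate_test_cases(["<bad-json>", None]):
    actions.put(case)
actions.put(None)
worker = threading.Thread(target=agent, args=(actions, results))
worker.start(); worker.join()
print(results)   # the analysis step would report these to the test originator
```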
DEFECT REPORTING IN APPLICATION TESTING
The present subject matter relates to defect reporting in application testing. In an implementation, a category of application testing is determined based on a testing instance of an application. The category of application testing is indicative of an aspect of the application being tested. A list of previously reported defects associated with the determined category of application testing is displayed in a display layer over the testing instance of the application. A first user-input, indicative of either acceptance or rejection of a previously reported defect from the list with respect to the testing instance of the application, is received. The first user-input is aggregated with previous user-inputs indicative of acceptance or rejection of the previously reported defect. Based on the aggregation, it is determined whether the previously reported defect is irrelevant to the testing instance of the application.
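A minimal sketch of the aggregation step; the majority-rejection threshold is an assumed parameter, not taken from the abstract:

```python
def is_irrelevant(previous_inputs, new_input, reject_ratio=0.6):
    """previous_inputs / new_input: True = accepted, False = rejected.

    The defect is deemed irrelevant to the testing instance once the
    share of rejections reaches the (assumed) threshold.
    """
    votes = previous_inputs + [new_input]
    return votes.count(False) / len(votes) >= reject_ratio

print(is_irrelevant([False, True, False], False))   # True: 3 of 4 rejected
```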
METHOD AND A SYSTEM FOR AUTOMATICALLY IDENTIFYING VIOLATIONS IN ONE OR MORE TEST CASES
The present disclosure relates in general to software testing, and to a method and a system for automatically identifying violations in test cases. A test case validation system categorizes the test cases into event-based test cases and binary test cases. Further, a Part-Of-Speech (POS) pattern is detected in the one or more test cases based on POS tags assigned to each of the tokens in the test cases. Thereafter, the detected POS pattern and the one or more tokens are compared with predefined POS patterns and predefined tokens to identify violations, if any, in the one or more test cases using pattern matching and Natural Language Processing (NLP). The predefined POS patterns and tokens used for the comparison are filtered based on the category of the test case, thus accelerating the violation identification process. The test case validation system is capable of accurately identifying more than one type of violation simultaneously.
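An illustrative sketch using NLTK for POS tagging (the abstract does not name a library) and tag-sequence matching against category-filtered patterns; the patterns shown are invented:

```python
import nltk   # assumes the punkt tokenizer and perceptron tagger are installed

# Invented violation patterns, keyed by test-case category so that only
# the relevant patterns are compared (the filtering step of the abstract).
VIOLATION_PATTERNS = {
    "event_based": [("VB", "DT", "NN")],   # e.g., action with no expected event
    "binary":      [("VB", "NN")],
}

def find_violations(step_text, category):
    """Return the predefined POS patterns found in the step's tag sequence."""
    tags = tuple(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(step_text)))
    return [p for p in VIOLATION_PATTERNS.get(category, [])
            if any(tags[i:i + len(p)] == p for i in range(len(tags)))]
```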
Cloud Assisted Behavioral Automated Testing
A computer readable storage medium, system and method for improving automated testing systems by incorporating first and second behavioral data. The first behavioral data is collected periodically and the second behavioral data is collected in real time. Receipt of the first and second behavioral data is followed by receipt of a system configuration template. A test case is updated based on the first and second behavioral data, and an automated test environment is reconfigured based on the first behavioral data, the second behavioral data, and the system configuration template. The test then executes in the automated test environment, producing a test result.
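A minimal sketch of the update and reconfiguration steps, with invented field names for the behavioral data and the configuration template:

```python
def update_test_case(test_case, periodic, realtime):
    """Fold the periodically collected and real-time data into a test case."""
    test_case["expected_load"] = periodic.get("avg_load", 1.0)
    test_case["hot_paths"] = realtime.get("active_endpoints", [])
    return test_case

def reconfigure_environment(template, periodic, realtime):
    """Derive an environment from the template plus both behavioral inputs."""
    env = dict(template)
    env["replicas"] = max(1, round(periodic.get("avg_load", 1.0)))
    env["mock_services"] = [s for s in template.get("services", [])
                            if s not in realtime.get("active_endpoints", [])]
    return env
```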
Self-healing hybrid element identification logic
A system and method for receiving, using one or more processors, a first testing identifier associated with a first element of an application under test; receiving, using the one or more processors, a second testing identifier associated with the first element of the application under test; evaluating, using the one or more processors, the first testing identifier; determining, using the one or more processors, a failure of the first testing identifier to identify an element in the application under test; evaluating, using the one or more processors, the second testing identifier; identifying, using the one or more processors, the first element in the application under test based on the second testing identifier; and repairing, using the one or more processors, the first testing identifier to identify the first element in the application under test.
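The described flow reduces naturally to a fallback-and-repair lookup; a sketch with a dictionary standing in for the application under test:

```python
def locate(dom, identifier):
    """Stand-in lookup; None models a failed identification."""
    return dom.get(identifier)

def self_healing_find(dom, primary, secondary, repairs):
    element = locate(dom, primary)            # evaluate the first identifier
    if element is not None:
        return element
    element = locate(dom, secondary)          # fall back to the second one
    if element is not None:
        repairs[primary] = secondary          # repair the broken identifier
    return element

dom = {"css:#submit-v2": "<button>"}          # the primary locator went stale
repairs = {}
print(self_healing_find(dom, "css:#submit", "css:#submit-v2", repairs), repairs)
```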
Method for Testing a Graphical Interface and Corresponding Test System
This test method for validating a specification of a graphical interface consists of developing a scenario file corresponding to the validation test to be performed. The scenario file includes a plurality of instructions in a natural programming language, each instruction including a function, parameters, and an expected state of the graphical interface following the application of the function. The test is performed automatically by interpreting the scenario file so as to generate commands intended for an engine capable of interacting with the graphical interface and monitoring the evolution of its current state, and then by analyzing a result file that associates each instruction of the scenario file with a result corresponding to the comparison of the current state of the graphical interface, following the application of the corresponding command, with the expected state.
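A minimal sketch of the interpret-compare loop, with an engine stub in place of the real interface driver and an invented two-instruction scenario:

```python
SCENARIO = [
    {"function": "click", "params": ["login_button"], "expected": "login_form"},
    {"function": "type",  "params": ["user_field", "alice"], "expected": "login_form"},
]

class Engine:
    """Stand-in for the engine that drives and observes the interface."""
    def __init__(self):
        self.state = "home"
    def apply(self, function, params):
        if function == "click" and params == ["login_button"]:
            self.state = "login_form"
        return self.state                        # current state after the command

def run_scenario(scenario, engine):
    """Associate each instruction with the comparison against its expected state."""
    return [{"instruction": ins,
             "result": engine.apply(ins["function"], ins["params"]) == ins["expected"]}
            for ins in scenario]

for row in run_scenario(SCENARIO, Engine()):
    print(row["instruction"]["function"], "->", "pass" if row["result"] else "fail")
```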
Method for testing a microservice application
Provided is a method for testing a microservice application having at least one microservice with at least one application programming interface, including: reading characteristic data of the application programming interface of the microservice of the microservice application and ascertaining at least one endpoint of the application programming interface; automatically generating an execution script on the basis of the characteristic data of the application programming interface; automatically generating a test infrastructure, wherein the test infrastructure includes at least one client entity; executing the execution script, wherein the client entity transmits a data query of the execution script to the application programming interface of the microservice and receives corresponding response data from the microservice; and ascertaining a transfer characteristic by means of the client entity.
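A sketch of the pipeline under the assumption that the characteristic data is an OpenAPI-style description and the transfer characteristic is per-endpoint status and response time (both assumptions; the abstract fixes neither):

```python
import time

SPEC = {"paths": {"/orders": {"get": {}}, "/orders/{id}": {"get": {}}}}

def ascertain_endpoints(spec):
    """Read the characteristic data and list the API's endpoints."""
    return [(path, method) for path, ops in spec["paths"].items() for method in ops]

def generate_script(endpoints):
    """Generate one query step per endpoint."""
    return [{"method": m.upper(), "path": p} for p, m in endpoints]

def client_send(step):
    """Stub for the client entity's request; a real test would call the API."""
    time.sleep(0.001)
    return {"status": 200}

def execute(script):
    """Run the script and record the transfer characteristic per endpoint."""
    characteristic = {}
    for step in script:
        t0 = time.perf_counter()
        response = client_send(step)
        characteristic[step["path"]] = (response["status"], time.perf_counter() - t0)
    return characteristic

print(execute(generate_script(ascertain_endpoints(SPEC))))
```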