Patent classifications
G06F11/3684
REST API Validation
Embodiments validate representational state transfer (“REST”) application programming interfaces (“APIs”). Embodiments receive a REST API specification that provides information for a plurality of REST APIs and parse the REST API specification to extract, for each REST API, a corresponding Uniform Resource Locator (“URL”) and corresponding parameter names, response codes, and payloads. Embodiments convert the parsed REST API specification into a converted text file, the converting including parameter constraints and parameter default values. Embodiments then generate all possible combinations of test data for each REST API from the converted text file and perform one or more test operations on each of the combinations of test data.
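The combination-generation step described above amounts to a Cartesian product over the candidate values of each parameter. A minimal sketch in Python, assuming a hypothetical parsed entry (the URL, parameter names, and candidate values here are illustrative, not from the patent):

```python
from itertools import product

# Hypothetical, simplified view of one parsed REST API entry: each parameter
# name maps to candidate test values drawn from its constraints and defaults.
api_spec = {
    "url": "/v1/orders",
    "params": {
        "status": ["open", "closed"],
        "limit": [1, 100],
    },
}

def all_test_combinations(spec):
    """Return every combination of candidate parameter values for one API."""
    names = list(spec["params"])
    value_lists = [spec["params"][n] for n in names]
    return [dict(zip(names, combo)) for combo in product(*value_lists)]

combos = all_test_combinations(api_spec)
# 2 statuses x 2 limits -> 4 test-data combinations
```

Each resulting dictionary would then be sent as one request against the API under test.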
INITIALIZE OPTIMIZED PARAMETER IN DATA PROCESSING SYSTEM
An approach is provided in which the approach loads a machine learning model and a set of test case statistical data into a user system. The set of test case statistical data is based on a set of test cases corresponding to the machine learning model and includes a plurality of input parameter sets and a corresponding set of output quality measurements. The approach compares user data on the user system against the set of test case statistical data and identifies one of the plurality of input parameter sets to optimize the machine learning model based on the set of output quality measurements. The approach generates an optimized machine learning model using the machine learning model and the identified input parameter set at the user system.
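The selection step above compares user data against recorded test-case statistics and picks the input parameter set with the best associated quality. A sketch under assumed structures (the parameter names, "profile" comparison key, and tie-breaking rule are illustrative assumptions, not specified by the abstract):

```python
# Hypothetical test case statistical data: input parameter sets paired with
# an output quality measurement and a data profile they were measured on.
test_stats = [
    {"params": {"lr": 0.01, "batch": 32}, "quality": 0.91, "profile": 100.0},
    {"params": {"lr": 0.10, "batch": 64}, "quality": 0.85, "profile": 500.0},
]

def select_params(user_profile, stats):
    """Pick the input parameter set whose recorded profile is nearest to the
    user's data, breaking ties by the higher output quality measurement."""
    best = min(stats, key=lambda s: (abs(s["profile"] - user_profile), -s["quality"]))
    return best["params"]

best = select_params(120.0, test_stats)  # nearest recorded profile is 100.0
```

The chosen parameter set would then be applied to the loaded model on the user system to produce the optimized model.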
DEVICE TESTING ARRANGEMENT
An arrangement for automated testing of mobile devices comprises a learning arrangement for learning how to use test devices that do not match an already defined test case pattern. In the arrangement, the learning arrangement generates instructions for performing a set of tasks. The tasks are then executed in the mobile device being tested. The mobile device provides feedback in the form of error/success messages, screenshots, source code, return values, and the like. Based on the feedback and earlier accumulated information, the learning arrangement can generate a new set of instructions in order to execute the set of tasks successfully.
VERIFICATION OF CONTROL COUPLING AND DATA COUPLING ANALYSIS IN SOFTWARE CODE
Methods and systems for verifying control coupling analysis in testing of software code include: selecting a source file to be tested, the source file having source code, the source file selected from a system set including a plurality of source files from one or more nodes in a system; identifying one or more control couples within the source file by performing static analysis on the source code of the source file; defining one or more test runs of the software code, the one or more test runs including one or more of the identified control couples, and the one or more test runs using dynamic analysis; executing the one or more defined test runs; identifying control coupling coverage of the source file based on the dynamic analysis; and generating a control coupling report based on the identified control coupling coverage of the source file.
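The static-analysis step above scans a source file for control couples, i.e. places where one module directs the control flow of another. A toy sketch using Python's `ast` module, where cross-module calls stand in for control couples (a deliberate simplification; the patent's analysis is not limited to this pattern):

```python
import ast

def control_couples(source_code, module_name):
    """Record calls from this source file into other modules
    (module.function style) as a simple proxy for control couples."""
    couples = set()
    tree = ast.parse(source_code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if isinstance(node.func.value, ast.Name):
                couples.add((module_name, node.func.value.id, node.func.attr))
    return couples

# Hypothetical source file under test: "controller" invokes "engine"
src = "import engine\n\ndef run():\n    engine.start(mode=1)\n"
found = control_couples(src, "controller")
```

Test runs exercising each identified couple would then be executed under dynamic analysis, and couples never reached at runtime show up as gaps in the coverage report.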
Smart test case generator
Embodiments provide systems, methods, and computer-readable storage media for automated and objective testing of applications or processes. Graphical representations of the application may be analyzed to derive attribute data and identify flows (e.g., possible processing paths that may be accessed during utilization of the application by a user). Test cases may be automatically generated based on the attribute data and the identified flows. Additionally, testing scripts for testing the portions of the application corresponding to each identified flow may be generated using machine learning logic. Once generated, the testing scripts may be executed against the application to test different portions of the application functionality (or processes). Execution of the testing scripts may be monitored to generate feedback used to train the machine learning logic. Reports may be generated based on the monitoring and provided to users to enable the users to resolve any errors encountered during the testing.
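Flow identification as described above reduces to enumerating paths through a graph derived from the application's graphical representation. A minimal sketch, assuming a hypothetical screen-transition graph (the screen names are illustrative):

```python
def enumerate_flows(graph, start, end):
    """Depth-first enumeration of all processing paths (flows) from start to
    end in an acyclic screen-transition graph."""
    flows = []
    def walk(node, path):
        if node == end:
            flows.append(path)
            return
        for nxt in graph.get(node, []):
            walk(nxt, path + [nxt])
    walk(start, [start])
    return flows

# Hypothetical application graph: login -> home -> (search | settings) -> checkout
app_graph = {
    "login": ["home"],
    "home": ["search", "settings"],
    "search": ["checkout"],
    "settings": ["checkout"],
}
flows = enumerate_flows(app_graph, "login", "checkout")
```

Each enumerated flow would seed one automatically generated test case, with attribute data filling in the concrete inputs for its testing script.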
VEHICLE OPERATION SAFETY MODEL TEST SYSTEM
System and techniques for test scenario verification, for a simulation of an autonomous vehicle safety action, are described. In an example, measuring performance of a test scenario used in testing an autonomous driving safety requirement includes: defining a test environment for a test scenario that tests compliance with a safety requirement including a minimum safe distance requirement; identifying test procedures to use in the test scenario that define actions for testing the minimum safe distance requirement; identifying test parameters to use with the identified test procedures, such as velocity, amount of braking, timing of braking, and rate of acceleration or deceleration; and creating the test scenario for use in an autonomous driving test simulator. Use of the test scenario includes applying the identified test procedures and the identified test parameters to identify a response of a test vehicle to the minimum safe distance requirement.
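A minimum safe distance requirement of the kind referenced above is often expressed with an RSS-style formula; the abstract does not fix a specific formulation, so the following is one published form, sketched with illustrative parameter values:

```python
def min_safe_longitudinal_distance(v_rear, v_front, rho,
                                   a_max_accel, a_min_brake, a_max_brake):
    """RSS-style minimum safe following distance (meters): worst case where
    the rear vehicle accelerates during reaction time rho, then brakes gently,
    while the front vehicle brakes as hard as possible."""
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + (v_rear + rho * a_max_accel) ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)

# Illustrative test parameters: 20 m/s following 10 m/s, 0.5 s reaction time
d = min_safe_longitudinal_distance(20.0, 10.0, 0.5, 2.0, 4.0, 8.0)
```

In a simulator, the test procedures would vary velocity, braking amount and timing, and acceleration/deceleration rates, then check that the test vehicle never lets its gap fall below this bound.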
NETWORK SERVICE MANAGEMENT SYSTEM AND NETWORK SERVICE MANAGEMENT METHOD
A CI/CD assist device accepts configuration data and test data collectively from a terminal (a vendor terminal) of a provider that provides a network service to a customer. The configuration data specifies a functional unit required to provide the network service, and the test data specifies test content for the network service or the functional unit. Each of a plurality of network operating systems (NOSs) automatically builds the functional unit specified by the configuration data accepted by the CI/CD assist device. Each of a plurality of test devices automatically conducts a test on the network service or the functional unit based on the test data. The network service and the functional unit are built in each environment.
SYSTEMS AND METHODS FOR AUTOMATED TEST DATA MICROSERVICES
Systems and methods for automated test data microservices are provided. Test versions of software (such as an Application Programming Interface (API)) may be configured to automatically generate test data and to call a microservice to manage the test data. The microservice may automatically add and remove the test data from an operational data store to facilitate the testing process and to automatically perform setup and teardown stages of the testing process.
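The setup/teardown responsibility described above can be pictured as a small service facade that tracks what it inserted so it can cleanly remove it afterwards. A sketch under assumed names (the class, its methods, and the in-memory store are illustrative, not the patent's interfaces):

```python
class TestDataService:
    """Hypothetical microservice facade: adds test records to an operational
    data store for a test run (setup) and removes them again (teardown)."""

    def __init__(self, store):
        self.store = store
        self._added = []  # keys inserted by this test run

    def setup(self, records):
        for key, value in records.items():
            self.store[key] = value
            self._added.append(key)

    def teardown(self):
        for key in self._added:
            self.store.pop(key, None)
        self._added.clear()

store = {"real:customer": 1}          # pre-existing operational data
svc = TestDataService(store)
svc.setup({"test:order": {"id": 42}})  # setup stage
# ... the API under test runs against the store here ...
svc.teardown()                         # teardown stage restores the store
```

Keeping the bookkeeping in the microservice means the test version of the API only has to generate data and make the call; cleanup cannot be forgotten by individual tests.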
SYSTEMS AND METHODS FOR TESTING COMPONENTS OR SCENARIOS WITH EXECUTION HISTORY
Systems and methods for testing components or scenarios with execution history are disclosed. A method may include: receiving, at a testing interface and from an application or program executed by a user electronic device, an identification of a test and one or more data layers of a plurality of data layers in a pod to test, the plurality of data layers including a data collection layer, a data ingestion layer, a data messaging layer, a data enrichment layer, and a data connect layer; receiving, by the testing interface, a selection of testing parameters or values for the identified test; retrieving, by the testing interface, the identified test; executing, by the testing interface, the identified test on the identified one or more data layers using the selected testing parameters or values; retrieving, by the testing interface, results of the execution of the test; and outputting, by the testing interface, the results.
Method, electronic device and storage medium for testing autonomous driving system
A method, an electronic device and a computer-readable storage medium for testing an autonomous driving system, which relate to the technical field of autonomous driving, are proposed. An embodiment for testing the autonomous driving system includes: obtaining scenario description information of a testing scenario; analyzing the scenario description information, and determining a scenario risk, a scenario probability and a scenario complexity corresponding to the testing scenario; obtaining a scenario weight of the testing scenario according to the scenario risk, scenario probability and scenario complexity; and determining a test period corresponding to the scenario weight, where the test period is used for testing the autonomous driving system in the testing scenario. The technical solution may reduce the testing pressure on the autonomous driving system and improve its testing efficiency.