G06F11/3608

System for the global solution of an event-dependent multicriteria non-convex optimization problem
11568104 · 2023-01-31 ·

A system for solving an event-dependent multicriteria optimization problem of at least one cyber-physical system comprises a control device for controlling the at least one cyber-physical system, the control device controlling the cyber-physical system in dependence on a list of prioritized objectives by solving at least one event-dependent suboptimization problem. The system is characterized in that each objective from the list of prioritized objectives is captured as an objective function consisting of at least two parts: a first part that directly captures the objective, and a second part that describes a condition under which the result of each preceding objective from each of the preceding suboptimization problems is substantially not negatively affected.
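The two-part objective structure described above resembles lexicographic (prioritized) optimization: each sub-problem optimizes its own criterion while constraining earlier criteria not to degrade. A minimal sketch over a finite candidate set, with all names and the candidate set being illustrative assumptions rather than the patented method:

```python
# Toy sketch of prioritized multicriteria optimization: each sub-problem
# minimizes its own objective while keeping earlier objectives within a
# tolerance of their best-found values (the "second part" of each
# objective function in the abstract). Candidate set is an assumption.

def lexicographic_minimize(candidates, objectives, tol=1e-9):
    """Solve objectives in priority order over a finite candidate set."""
    feasible = list(candidates)
    for f in objectives:
        best = min(f(x) for x in feasible)
        # Keep only candidates that do not degrade this objective's result.
        feasible = [x for x in feasible if f(x) <= best + tol]
    return feasible[0]

candidates = [(x, y) for x in range(5) for y in range(5)]
# Priority 1: minimize y; priority 2: minimize (x - 3)**2.
result = lexicographic_minimize(
    candidates,
    [lambda p: p[1], lambda p: (p[0] - 3) ** 2],
)
# result is (3, 0): y = 0 is preserved while the second objective improves.
```

A continuous, non-convex variant would replace the enumeration with a global solver per sub-problem, but the filtering structure stays the same.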

STATIC SOURCE CODE ANALYSIS USING EXPLICIT FEEDBACK AND IMPLICIT FEEDBACK
20230236950 · 2023-07-27 ·

Techniques for performing an improved static code analysis are described. A computing device retrieves one or more source code files and metadata for each of the one or more source code files from storage components. The computing device identifies, using a model, one or more potential defects in a first source code file of the one or more source code files based at least in part on one or more of the source code saved in the first source code file and the metadata for the first source code file. The computing device receives both explicit feedback and implicit feedback for the one or more potential defects. The computing device updates the model with both the explicit feedback and the implicit feedback to develop an updated model.
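The feedback loop can be pictured with a minimal defect-likelihood model whose per-token weights are adjusted by both kinds of feedback. The feature scheme, learning rates, and the idea that implicit feedback (e.g., a flagged line was later edited) carries less weight than explicit feedback are illustrative assumptions:

```python
# Hedged sketch: a token-weight defect scorer updated from explicit
# feedback (developer confirms/rejects a finding) and implicit feedback
# (e.g. the flagged line was later patched). All details are assumptions.

class FeedbackModel:
    def __init__(self):
        self.weights = {}  # token -> defect-likelihood weight

    def score(self, line):
        return sum(self.weights.get(tok, 0.0) for tok in line.split())

    def flag(self, source_lines, threshold=0.5):
        return [i for i, ln in enumerate(source_lines)
                if self.score(ln) > threshold]

    def update(self, line, is_defect, rate):
        delta = rate if is_defect else -rate
        for tok in line.split():
            self.weights[tok] = self.weights.get(tok, 0.0) + delta

model = FeedbackModel()
# Explicit feedback gets a higher learning rate than implicit feedback.
model.update("strcpy ( buf , src )", True, rate=1.0)   # explicit: confirmed
model.update("strcpy ( buf , src )", True, rate=0.2)   # implicit: line patched
model.update("printf ( msg )", False, rate=1.0)        # explicit: false positive
flagged = model.flag(["strcpy ( buf , src )", "printf ( msg )"])
# flagged is [0]: only the strcpy line remains above the threshold.
```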

VIDEO GAME TESTING AND AUTOMATION FRAMEWORK

An automated video game testing framework and method includes communicatively coupling an application programming interface (API) to an agent in a video game, where the video game includes a plurality of in-game objects that are native to the video game. The agent is managed as an in-game object of the video game. A test script is executed to control the agent, via the API, to induce gameplay and interrogate a behavior of a test object. The test object is identified from the plurality of in-game objects based on a query that specifies an object attribute of the test object.
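The abstract's loop of query, control, and interrogation can be sketched with a stand-in object model. The `GameAPI` surface, attribute names, and the door behavior below are illustrative assumptions, not the framework's actual API:

```python
# Hedged sketch: an agent managed as an in-game object, a query that
# selects a test object by attribute, and a test script that drives the
# agent via the API and interrogates the object's behavior.

class GameObject:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes

class GameAPI:
    """Minimal stand-in for the API communicatively coupled to the agent."""
    def __init__(self, objects):
        self.objects = objects                      # native in-game objects
        self.agent = GameObject("agent", kind="agent", position=0)
        self.objects.append(self.agent)             # agent managed in-game

    def query(self, **attrs):
        """Find in-game objects whose attributes match the query."""
        return [o for o in self.objects
                if all(o.attributes.get(k) == v for k, v in attrs.items())]

    def move_agent_to(self, obj):
        self.agent.attributes["position"] = obj.attributes["position"]

# Test script: locate the test object by attribute, induce gameplay by
# driving the agent to it, then interrogate the object's behavior.
api = GameAPI([GameObject("door", kind="door", position=5, open=False)])
door = api.query(kind="door")[0]
api.move_agent_to(door)
door.attributes["open"] = api.agent.attributes["position"] == 5
```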

Method and apparatus for processing test execution logs to determine error locations and error types

A method of processing test execution logs to determine error location and source includes creating a set of training examples based on previously processed test execution logs, clustering the training examples into a set of clusters using an unsupervised learning process, and using the training examples of each cluster to train a respective supervised learning process to label data, where each generated cluster is used as a class/label to identify the type of errors in the test execution log. The labeled data is then processed by supervised learning processes, specifically a classification algorithm. Once the classification model is built, it is used to predict the type of errors in future/unseen test execution logs. In some embodiments, the unsupervised learning process is a density-based spatial clustering of applications with noise (DBSCAN) algorithm, and the supervised learning processes are random forests and deep neural networks.
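The two-stage pipeline can be illustrated end to end with toy stand-ins: a distance-threshold clusterer in place of DBSCAN, and a nearest-centroid predictor in place of a random forest or neural network. The feature extraction and sample logs are assumptions:

```python
# Toy sketch: unsupervised clustering assigns a class label to each
# training log; a classifier trained on those labels then predicts the
# error type of unseen logs. Both models are simplified stand-ins.

def extract_features(log):
    # Illustrative features: (count of "ERROR", count of "timeout").
    return (log.count("ERROR"), log.count("timeout"))

def cluster(points, eps=1.5):
    """Greedy distance-threshold clustering (DBSCAN stand-in)."""
    labels, centers = [], []
    for p in points:
        for i, c in enumerate(centers):
            if sum((a - b) ** 2 for a, b in zip(p, c)) <= eps ** 2:
                labels.append(i)
                break
        else:
            centers.append(p)
            labels.append(len(centers) - 1)
    return labels, centers

def classify(point, centers):
    """Nearest-centroid prediction (supervised-model stand-in)."""
    return min(range(len(centers)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(point, centers[i])))

train_logs = ["ERROR ERROR ERROR crash", "timeout timeout retry",
              "ERROR ERROR ERROR"]
points = [extract_features(log) for log in train_logs]
labels, centers = cluster(points)        # labels: [0, 1, 0]
predicted = classify(extract_features("new run: timeout timeout"), centers)
# predicted is 1: the unseen log falls in the timeout-error cluster.
```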

COMPOSITIONAL VERIFICATION OF EMBEDDED SOFTWARE SYSTEMS

A computer-implemented method for static testing of a software system that is decomposed into software units connected by interfaces. The method comprises receiving context information for an interface, which includes at least one postcondition for the at least one output variable of a respective first software unit and/or a precondition for the input variable of a respective second software unit; receiving a selection of a third software unit, so that a substitute decomposition of the software system into the third software unit and a complement of the third software unit is produced, the third software unit and the complement together forming the software system and being connected via a substitute interface; selecting, based on the context information, a postcondition per output variable of the complement; and testing whether the selected postcondition can be forward-propagated through the third software unit with regard to a formal verification.
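The final forward-propagation check can be sketched by modeling conditions as value intervals. Treating the third unit's transfer function as a monotone map is a simplifying assumption; a real verifier would use symbolic predicates and an SMT solver:

```python
# Hedged sketch: check whether a selected postcondition on the
# complement's output, forward-propagated through the third software
# unit, implies the required postcondition. Intervals stand in for
# formal pre/postconditions; all names and bounds are assumptions.

def propagate(interval, f):
    """Forward-propagate an interval through a monotone function f."""
    lo, hi = f(interval[0]), f(interval[1])
    return (min(lo, hi), max(lo, hi))

def implies(interval, required):
    """interval is contained in required: the postcondition holds."""
    return required[0] <= interval[0] and interval[1] <= required[1]

# Complement postcondition: output variable x lies in [0, 10].
# Third unit computes y = 2*x + 1; required postcondition: y in [0, 25].
out = propagate((0, 10), lambda x: 2 * x + 1)   # (1, 21)
verified = implies(out, (0, 25))                # True: verification passes
```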

Enhanced application performance framework

This document describes a framework for measuring and improving the performance of applications, such as distributed applications and web applications. In one aspect, a method includes performing a test on an application. The test includes executing the application on one or more computers and, while executing the application, simulating a set of workload scenarios for which performance of the application is measured during the test. While performing the test, a set of performance metrics that indicate performance of individual components involved in executing the application during the test is obtained. A knowledge graph is queried using the set of performance metrics. The knowledge graph links the individual components to corresponding performance metrics and defines a set of hotspot conditions that are each based on one or more of the corresponding performance metrics for the individual components. A given hotspot condition is detected based on the set of performance metrics.
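The query step can be illustrated with a knowledge graph reduced to a plain mapping that links components to their metrics and hotspot conditions. Component names, metric names, and thresholds below are illustrative assumptions:

```python
# Hedged sketch: a knowledge graph links each component involved in
# executing the application to its performance metrics and to hotspot
# conditions defined over those metrics; querying it with measured
# values detects hotspot conditions.

knowledge_graph = {
    "web-frontend": {
        "metrics": ["p99_latency_ms"],
        "hotspots": {"slow_responses": lambda m: m["p99_latency_ms"] > 500},
    },
    "db": {
        "metrics": ["cpu_pct"],
        "hotspots": {"cpu_saturation": lambda m: m["cpu_pct"] > 90},
    },
}

def detect_hotspots(graph, measured):
    """Evaluate each component's hotspot conditions against measurements."""
    hits = []
    for component, node in graph.items():
        metrics = {k: measured[component][k] for k in node["metrics"]}
        for name, condition in node["hotspots"].items():
            if condition(metrics):
                hits.append((component, name))
    return hits

measured = {"web-frontend": {"p99_latency_ms": 750}, "db": {"cpu_pct": 40}}
hotspots = detect_hotspots(knowledge_graph, measured)
# hotspots is [("web-frontend", "slow_responses")]
```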

Adaptive, speculative, agent-based workload generation

Load testing a service having a plurality of different states is provided. A multitude of simulated users accessing the service are divided into a plurality of cohorts. Simulated users within a given cohort share a similar personality type. A load test of the service is performed by applying a set of service requests from each respective cohort to the service. In response to a percentage of simulated users of each cohort encountering a particular state in the service, a user response is determined for the percentage of simulated users within each cohort at that particular state based on a probabilistic user behavior model corresponding to a personality type of each cohort such that user responses at that particular state are distributed in accordance with the probabilistic user behavior model. Distributed user responses at that particular state are applied to the load test in accordance with the probabilistic user behavior model.
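The cohort response step can be sketched deterministically: when a fraction of a cohort reaches a state, their responses are apportioned according to that cohort's behavior model. The personality types, actions, and probabilities are illustrative assumptions:

```python
# Hedged sketch: simulated users are grouped into cohorts by personality
# type; when some of a cohort's users hit a particular service state,
# their responses are distributed per the cohort's probabilistic user
# behavior model. Expected counts are used instead of random sampling.

behavior_models = {
    "impatient": {"retry": 0.7, "abandon": 0.3},
    "patient":   {"wait": 0.9, "retry": 0.1},
}

def distribute_responses(cohort, n_users_at_state):
    """Assign response counts that follow the cohort's behavior model."""
    model = behavior_models[cohort]
    actions = list(model)
    counts, assigned = {}, 0
    for action in actions[:-1]:
        counts[action] = int(round(model[action] * n_users_at_state))
        assigned += counts[action]
    counts[actions[-1]] = n_users_at_state - assigned  # remainder
    return counts

# 40 of the 100 impatient users encounter a checkout-error state:
responses = distribute_responses("impatient", 40)
# responses is {"retry": 28, "abandon": 12}
```

A production load generator would sample per-user from the model rather than use expected counts, but the distribution of applied responses converges to the same proportions.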

Data model generation using generative adversarial networks

Methods for generating data models using a generative adversarial network can begin by receiving a data model generation request by a model optimizer from an interface. The model optimizer can provision computing resources with a data model. As a further step, a synthetic dataset for training the data model can be generated using a generative network of a generative adversarial network, the generative network trained to generate output data differing at least a predetermined amount from a reference dataset according to a similarity metric. The computing resources can train the data model using the synthetic dataset. The model optimizer can evaluate performance criteria of the data model and, based on the evaluation of the performance criteria of the data model, store the data model and metadata of the data model in a model storage. The data model can then be used to process production data.
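The dissimilarity constraint on the generative network's output can be sketched as rejection filtering: only candidates differing from every reference record by at least a predetermined amount under the similarity metric are kept. The metric, the stand-in generator, and the threshold are assumptions:

```python
# Hedged sketch: synthetic records are accepted only if they differ from
# all reference records by at least min_distance under a similarity
# metric. A simple parametric generator stands in for a trained
# generative network.

def distance(a, b):
    """Euclidean distance as the (assumed) similarity metric."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def generate_synthetic(generator, reference, n, min_distance):
    """Keep only candidates far enough from all reference records."""
    accepted, i = [], 0
    while len(accepted) < n:
        candidate = generator(i)
        i += 1
        if all(distance(candidate, r) >= min_distance for r in reference):
            accepted.append(candidate)
    return accepted

reference = [(0.0, 0.0), (1.0, 1.0)]
# Stand-in generator walks a diagonal; a trained GAN would sample instead.
synthetic = generate_synthetic(lambda i: (0.5 * i, 0.5 * i),
                               reference, n=2, min_distance=1.0)
# synthetic is [(2.0, 2.0), (2.5, 2.5)]: earlier points are too close
# to the reference dataset and are rejected.
```

In the patented arrangement this constraint is learned by the generative network during training rather than enforced by post-hoc rejection; the sketch only shows the acceptance criterion itself.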

Method, System, and Computer Program Product for Automatic Selection of Tests for Software System Regression Testing Using Machine Learning
20230222051 · 2023-07-13 ·

Provided is a computer-implemented method, system, and computer program product for automatic selection of tests for software system regression testing using machine learning. The method includes generating a test mapping including at least one test of a plurality of tests corresponding to a source file, where the plurality of tests and the at least one source file are associated with a software repository; determining a defective score for the at least one test based on historical test data of the at least one test; receiving a component criticality score and a defect definition corresponding to the source file; generating a key value corresponding to the at least one test based on the defective score, the component criticality score, and the defect definition; determining a subset of tests of the plurality of tests based on the key value corresponding to the at least one test; and executing the subset of tests with the software repository.
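The ranking step can be sketched with a multiplicative key value; the weighting scheme, score values, and file names below are illustrative assumptions rather than the patented formula:

```python
# Hedged sketch: each test mapped to a source file gets a key value
# combining its defective score, the file's component criticality score,
# and a weight derived from the defect definition; the top-k subset by
# key value is selected for execution.

defect_weights = {"functional": 1.0, "cosmetic": 0.2}  # assumed weights

def key_value(defective_score, criticality, defect_definition):
    return defective_score * criticality * defect_weights[defect_definition]

def select_tests(test_mapping, source_file, criticality,
                 defect_definition, k):
    """Rank tests mapped to a source file and keep the top-k subset."""
    ranked = sorted(test_mapping[source_file],
                    key=lambda t: key_value(t["defective_score"],
                                            criticality, defect_definition),
                    reverse=True)
    return [t["name"] for t in ranked[:k]]

test_mapping = {"payment.c": [
    {"name": "test_refund",   "defective_score": 0.9},
    {"name": "test_ui_label", "defective_score": 0.1},
    {"name": "test_charge",   "defective_score": 0.6},
]}
subset = select_tests(test_mapping, "payment.c",
                      criticality=0.8, defect_definition="functional", k=2)
# subset is ["test_refund", "test_charge"]
```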

Detecting performance regressions in software for controlling autonomous vehicles
11544173 · 2023-01-03 ·

The disclosure relates to detecting performance regressions in software used to control autonomous vehicles. For instance, a simulation may be run using a first version of the software. While the simulation is running, CPU and memory usage by one or more functions of the first version of the software may be sampled. The sampled CPU and memory usage may be compared to CPU or memory usage by each of the one or more functions in a plurality of simulations, each running a corresponding second version of the software. Based on the comparisons, an anomaly corresponding to a performance regression in the first version of the software relating to one of the one or more functions may be identified. In response to detecting the anomaly, the first version of the software and the one of the one or more functions may be flagged for review.
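The comparison step can be sketched as a per-function anomaly test against baseline simulation runs. The z-score criterion, threshold, function names, and sample figures are illustrative assumptions:

```python
# Hedged sketch: per-function CPU usage sampled from a simulation of the
# first software version is compared against samples from simulations of
# earlier versions; a function is flagged when its usage deviates beyond
# a z-score threshold (a stand-in for the anomaly criterion).

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def flag_regressions(current, baselines, z_threshold=3.0):
    """Flag functions whose current usage is anomalously high."""
    flagged = []
    for fn, usage in current.items():
        history = baselines[fn]
        sd = stdev(history) or 1e-9        # guard against zero variance
        if (usage - mean(history)) / sd > z_threshold:
            flagged.append(fn)
    return flagged

baselines = {"plan_route": [10.0, 11.0, 9.0, 10.0],
             "detect_obstacles": [30.0, 29.0, 31.0, 30.0]}
current = {"plan_route": 25.0, "detect_obstacles": 30.5}
flagged = flag_regressions(current, baselines)
# flagged is ["plan_route"]: its CPU usage jumped far above baseline.
```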