Patent classifications
G06F11/3616
SYSTEM AND METHOD FOR BIAS EVALUATION SCANNING AND MATURITY MODEL
A system and method for automatically coding out biases in applications, systems, and processes are disclosed. A processor operatively connected to a memory via a communication interface applies an intake process, based on received inventory data, to applications, systems, and processes and implements a machine learning model in response to applying the intake process. The processor also identifies areas of potential bias data within the applications, systems, and processes by utilizing the machine learning model to analyze response data received during the intake process; generates output data that includes bias data and exceptions data identified for the applications, systems, and processes; and mitigates the bias data and exceptions data in response to the output data by implementing a mitigation process.
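The intake → model → output → mitigation flow described in this abstract can be sketched in a few lines of Python. This is only an illustration of the pipeline shape; the keyword heuristic standing in for the machine learning model, and all names (`run_intake`, `identify_bias`, `RISKY_FIELDS`), are assumptions, not the claimed system:

```python
# Hypothetical sketch of the intake -> model -> output -> mitigation pipeline.
# The "model" here is a stand-in keyword heuristic, not a real ML model.

RISKY_FIELDS = {"gender", "age", "zip_code"}  # assumed proxy attributes

def run_intake(inventory):
    """Collect response data for each application in the inventory."""
    return [{"name": app["name"], "fields": app.get("fields", [])}
            for app in inventory]

def identify_bias(responses):
    """Flag applications whose intake responses use risky fields."""
    findings = []
    for resp in responses:
        hits = RISKY_FIELDS.intersection(resp["fields"])
        if hits:
            findings.append({"app": resp["name"], "bias_fields": sorted(hits)})
    return findings

def mitigate(findings):
    """Produce one mitigation action per bias finding."""
    return [{"app": f["app"], "action": "drop_fields", "fields": f["bias_fields"]}
            for f in findings]

inventory = [{"name": "loan-scoring", "fields": ["income", "zip_code"]},
             {"name": "inventory-sync", "fields": ["sku", "qty"]}]
output = mitigate(identify_bias(run_intake(inventory)))
```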
Automation system and method
A computer-implemented method, computer program product and computing system for receiving a complex task; processing the complex task to define a plurality of discrete tasks each having a discrete goal; executing the plurality of discrete tasks on a plurality of machine-accessible public computing platforms; determining if any of the plurality of discrete tasks failed to achieve its discrete goal; and if a specific discrete task failed to achieve its discrete goal, defining a substitute discrete task having a substitute discrete goal.
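The decompose-execute-substitute loop in this abstract can be sketched as follows. The task names, the platform model, and the success check are invented for the example; only the control flow (run each discrete task, and on failure swap in a substitute task with a substitute goal) reflects the abstract:

```python
# Illustrative sketch: split a complex task into discrete tasks, run each on a
# platform, and run a substitute task when a discrete goal is not achieved.

def execute(task, platform):
    """Pretend to run a discrete task on a public computing platform."""
    return task not in platform.get("unsupported", set())

def run_complex_task(discrete_tasks, platforms, substitutes):
    results = {}
    for task, platform in zip(discrete_tasks, platforms):
        if execute(task, platform):
            results[task] = "ok"
        elif task in substitutes:
            # The discrete goal failed: fall back to a substitute discrete task.
            sub = substitutes[task]
            results[task] = "substituted" if execute(sub, platform) else "failed"
        else:
            results[task] = "failed"
    return results

platforms = [{"name": "p1"}, {"name": "p2", "unsupported": {"ocr"}}]
results = run_complex_task(["translate", "ocr"], platforms,
                           substitutes={"ocr": "manual_review"})
```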
STATIC SOURCE CODE ANALYSIS USING EXPLICIT FEEDBACK AND IMPLICIT FEEDBACK
Techniques for performing an improved static code analysis are described. A computing device retrieves one or more source code files and metadata for each of the one or more source code files from storage components. The computing device identifies, using a model, one or more potential defects in a first source code file of the one or more source code files based at least in part on one or more of the source code saved in the first source code file and the metadata for the first source code file. The computing device receives both explicit feedback and implicit feedback for the one or more potential defects. The computing device updates the model with both the explicit feedback and the implicit feedback to develop an updated model.
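The feedback loop in this abstract — a model flags potential defects, then is updated with both explicit feedback (a user marking a finding valid or invalid) and implicit feedback (a user fixing or ignoring the flagged code) — can be sketched with a toy rule-weight model. The rules, weights, and update sizes are all invented for illustration:

```python
# Hedged sketch: a toy defect "model" with per-rule weights, updated with
# explicit feedback (strong signal) and implicit feedback (weak signal).

class DefectModel:
    def __init__(self):
        self.rule_weights = {"bare_except": 1.0, "todo_left": 1.0}

    def find_defects(self, source):
        """Report rules that match the source and are still weighted above 0.5."""
        defects = []
        if "except:" in source and self.rule_weights["bare_except"] > 0.5:
            defects.append("bare_except")
        if "TODO" in source and self.rule_weights["todo_left"] > 0.5:
            defects.append("todo_left")
        return defects

    def update(self, rule, explicit=None, implicit=None):
        # Explicit feedback moves the weight strongly; implicit feedback weakly.
        if explicit is not None:
            self.rule_weights[rule] += 0.4 if explicit else -0.4
        if implicit is not None:
            self.rule_weights[rule] += 0.2 if implicit else -0.2

model = DefectModel()
src = "try:\n    work()\nexcept:\n    pass  # TODO handle\n"
first = model.find_defects(src)
model.update("todo_left", explicit=False, implicit=False)  # dismissed and ignored
second = model.find_defects(src)  # the dismissed rule no longer fires
```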
Enhanced application performance framework
This document describes a framework for measuring and improving the performance of applications, such as distributed applications and web applications. In one aspect, a method includes performing a test on an application. The test includes executing the application on one or more computers and, while executing the application, simulating a set of workload scenarios for which performance of the application is measured during the test. While performing the test, a set of performance metrics that indicate performance of individual components involved in executing the application during the test is obtained. A knowledge graph is queried using the set of performance metrics. The knowledge graph links the individual components to corresponding performance metrics and defines a set of hotspot conditions that are each based on one or more of the corresponding performance metrics for the individual components. A given hotspot condition is detected based on the set of performance metrics.
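The knowledge-graph step in this abstract — components linked to their metrics, with hotspot conditions defined over those metrics — can be sketched as a dictionary of predicates. The component names, metrics, and thresholds below are assumptions for illustration only:

```python
# Sketch of the knowledge-graph lookup: each component node lists its metrics
# and hotspot conditions (predicates over those metrics).

knowledge_graph = {
    "db": {"metrics": ["db_latency_ms"],
           "hotspots": [lambda m: m["db_latency_ms"] > 200]},
    "web": {"metrics": ["p95_response_ms", "error_rate"],
            "hotspots": [lambda m: m["p95_response_ms"] > 500,
                         lambda m: m["error_rate"] > 0.01]},
}

def detect_hotspots(graph, measured):
    """Return components whose hotspot conditions fire for the measured metrics."""
    hot = []
    for component, node in graph.items():
        subset = {k: measured[k] for k in node["metrics"]}
        if any(cond(subset) for cond in node["hotspots"]):
            hot.append(component)
    return sorted(hot)

measured = {"db_latency_ms": 250, "p95_response_ms": 320, "error_rate": 0.002}
hotspots = detect_hotspots(knowledge_graph, measured)
```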
MULTI-PASS PERFORMANCE PROFILING
Apparatuses, systems, and techniques to collect compute performance information. In at least one embodiment, an API is performed to cause two or more portions of at least one software program to be concurrently performed a plurality of times in order to generate one or more performance metrics.
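The core idea here — performing portions of a program a plurality of times to generate performance metrics — can be illustrated with a simple repeat-and-aggregate profiler. The abstract concerns an API for concurrent multi-pass collection; this sequential sketch only shows the multi-pass aggregation, with invented portion names:

```python
# Minimal multi-pass profiling sketch: run each code portion several times
# and aggregate wall-clock timings into simple metrics.
import time

def profile(portions, passes=5):
    metrics = {}
    for name, fn in portions.items():
        samples = []
        for _ in range(passes):
            start = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - start)
        # min is robust to scheduling noise; mean shows typical cost
        metrics[name] = {"min_s": min(samples), "mean_s": sum(samples) / passes}
    return metrics

portions = {"sum": lambda: sum(range(10_000)),
            "sort": lambda: sorted(range(10_000, 0, -1))}
metrics = profile(portions)
```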
MULTI-TENANT JAVA AGENT INSTRUMENTATION SYSTEM
In one embodiment, a device launches a core agent for a Java application. The core agent loads a first tenant and a second tenant, each tenant having its own isolated class loader. The device instruments, via the core agent and by each tenant, the Java application to capture data regarding execution of the Java application. The device provides the captured data to a user interface.
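The structure of this abstract — a core agent that loads multiple tenants, each with isolated state, and instruments the application so every tenant captures execution data independently — can be sketched conceptually. The abstract concerns Java class loaders; the per-tenant objects below are a Python stand-in for that isolation, and all names are invented:

```python
# Conceptual sketch of the multi-tenant agent: the core agent wraps
# application calls, and each tenant keeps its own isolated captured data
# (standing in for Java's per-tenant isolated class loaders).

class Tenant:
    def __init__(self, name):
        self.name = name
        self.captured = []  # isolated per tenant

class CoreAgent:
    def __init__(self):
        self.tenants = []

    def load_tenant(self, name):
        self.tenants.append(Tenant(name))

    def instrument(self, fn):
        """Wrap an application function so each tenant records its execution."""
        def wrapped(*args, **kwargs):
            result = fn(*args, **kwargs)
            for tenant in self.tenants:  # each tenant captures independently
                tenant.captured.append((fn.__name__, args, result))
            return result
        return wrapped

agent = CoreAgent()
agent.load_tenant("tenant-a")
agent.load_tenant("tenant-b")

@agent.instrument
def handle_request(path):
    return f"200 {path}"

handle_request("/health")
captures = {t.name: t.captured for t in agent.tenants}
```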
Code development management system
A system includes one or more code development servers operable to monitor development of code files and one or more code execution servers operable to execute the code files. One or more code analysis tools of the system include instructions that when executed by at least one processing device result in collecting code development data associated with development of the code files on a per user basis and determining a predicted code execution performance score of one or more selected files of the code files based on the code development data. One or more resources of the one or more code execution servers associated with execution of the one or more selected files are predictively allocated based on a predicted code execution performance score. One or more code execution metrics are captured associated with executing the one or more selected files on the one or more code execution servers.
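The predictive step in this abstract — score a file from its development data, then allocate execution resources from the score — can be sketched as follows. The scoring formula and resource tiers are invented assumptions, not the patented scoring:

```python
# Sketch of predictive allocation: derive a predicted performance score from
# per-user code development data, then size execution resources from it.

def predict_score(dev_data):
    """Higher churn lowers the predicted score; code review raises it."""
    churn_penalty = min(dev_data["lines_changed"] / 1000, 1.0)
    review_bonus = 0.2 if dev_data["reviewed"] else 0.0
    return max(0.0, min(1.0, 0.8 - 0.5 * churn_penalty + review_bonus))

def allocate(score):
    """Predictively pick a server tier before execution."""
    if score >= 0.7:
        return {"cpus": 2, "memory_gb": 4}   # likely efficient code
    return {"cpus": 8, "memory_gb": 16}      # hedge with extra headroom

dev_data = {"lines_changed": 1200, "reviewed": False}
score = predict_score(dev_data)
resources = allocate(score)
```

After execution, captured code execution metrics would feed back into the scoring, though that loop is not shown here.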
Detecting performance regressions in software for controlling autonomous vehicles
The disclosure relates to detecting performance regressions in software used to control autonomous vehicles. For instance, a simulation may be run using a first version of the software. While the simulation is running, CPU and memory usage by one or more functions of the first version of the software may be sampled. The sampled CPU and memory usage may be compared to CPU or memory usage by each of the one or more functions in a plurality of simulations each running a corresponding second version of the software. Based on the comparisons, an anomaly corresponding to a performance regression in the first version of the software relating to one of the one or more functions may be identified. In response to detecting the anomaly, the first version of the software and the one of the one or more functions may be flagged for review.
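The comparison step above can be sketched with a simple z-score check: per-function samples from the new version are compared against a baseline distribution from prior-version simulations. The function names, sample values, and threshold are invented; the abstract does not specify the anomaly test:

```python
# Sketch of regression detection: flag functions whose current CPU usage is
# far above the mean of baseline simulations (simple z-score heuristic).
import statistics

def find_regressions(current, baselines, z_threshold=3.0):
    """Flag functions whose current usage exceeds baseline mean by > z_threshold stdevs."""
    flagged = []
    for fn, value in current.items():
        history = baselines[fn]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9  # avoid division by zero
        if (value - mean) / stdev > z_threshold:
            flagged.append(fn)
    return flagged

baselines = {"plan_route": [10.0, 11.0, 10.5, 10.2],
             "detect_objects": [40.0, 41.0, 39.5, 40.5]}
current = {"plan_route": 25.0, "detect_objects": 40.8}  # CPU ms per tick
flagged = find_regressions(current, baselines)
```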
Techniques for Improved Statistically Accurate A/B Testing of Software Build Versions
A data processing system for A/B testing software product builds implements dividing a group of user devices into a first subset and a second subset to participate in a controlled build rollout of a second version of the software product; sending a first signal to the first subset of user devices to cause them to reinstall a first version of the software product that has previously been installed on them; sending a second signal to the second subset of user devices to cause them to install the second version of the software product; collecting telemetry data from the user devices of both subsets; and comparing the performance of the first and second versions based on the telemetry data.
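The rollout steps above can be sketched end to end. Note the key statistical trick in the abstract: the control group *reinstalls* version 1 so that both groups experience an install, controlling for install-effect noise. The signalling and telemetry details below are invented stand-ins:

```python
# Sketch of the controlled rollout: split devices, reinstall v1 on one half,
# install v2 on the other, then compare mean startup times from telemetry.
import random

def split(devices, seed=0):
    """Randomly divide devices into two subsets of (near) equal size."""
    rng = random.Random(seed)
    shuffled = devices[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def rollout_and_measure(devices, telemetry):
    group_a, group_b = split(devices)
    # group_a is signalled to REINSTALL v1 (controls for install-effect noise);
    # group_b is signalled to install v2.
    mean = lambda grp, ver: sum(telemetry[d][ver] for d in grp) / len(grp)
    return {"v1_startup_ms": mean(group_a, "v1"),
            "v2_startup_ms": mean(group_b, "v2")}

devices = ["d1", "d2", "d3", "d4"]
telemetry = {d: {"v1": 120.0, "v2": 100.0} for d in devices}
report = rollout_and_measure(devices, telemetry)
```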