Patent classifications
G06F11/3608
Method for verifying software
A method for verifying an operating software block. The operating software block to be verified is defined based on an operating software. Function inputs and outputs corresponding to the operating software block are ascertained. A multi-dimensional parameter space is defined, each dimension of which corresponds to a function input of the operating software block. Input data tuples, which correspond to points within specifiable limits of the parameter space, are formed based on predetermined rules. The operating software block is executed using the input data tuples in order to obtain output data, so that for every function output a dependency on the input data of the function inputs is ascertained. The dependency of the function outputs is compared with a specified standard dependency. A reaction is initiated based on a deviation between the dependency of a function output and the standard dependency.
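The claimed flow can be sketched roughly as follows. All names, the toy operating-software block, and the dependency-ascertainment strategy (varying one input at a time over the parameter-space limits and checking which outputs change) are illustrative assumptions, not taken from the patent:

```python
from itertools import product

def ascertain_dependencies(block, limits):
    """For each function output of `block`, ascertain which function inputs
    it depends on. `limits` maps each input name to the values spanning its
    dimension of the parameter space; `block` maps an input dict to an
    output dict."""
    names = list(limits)
    # Input data tuples: points within the specifiable limits of the space.
    tuples_ = [dict(zip(names, vals)) for vals in product(*limits.values())]
    deps = {out: set() for out in block(tuples_[0])}
    for name in names:
        # Vary one input while holding the others fixed; an output that
        # changes depends on that input.
        for t in tuples_:
            for alt in limits[name]:
                y1, y2 = block(t), block(dict(t, **{name: alt}))
                for out in y1:
                    if y1[out] != y2[out]:
                        deps[out].add(name)
    return deps

def verify_block(block, limits, standard_deps):
    """Compare the ascertained dependencies with the standard dependency;
    the 'reaction' here is simply reporting each deviation."""
    deps = ascertain_dependencies(block, limits)
    return {out: {"observed": deps.get(out), "standard": standard_deps[out]}
            for out in standard_deps if deps.get(out) != standard_deps[out]}

# Toy block: output y depends on inputs a and b, output z only on a.
block = lambda t: {"y": t["a"] + t["b"], "z": t["a"] * 2}
limits = {"a": [0, 1], "b": [0, 1]}
print(verify_block(block, limits, {"y": {"a", "b"}, "z": {"a"}}))  # {} -> conformant
```

A conforming block yields an empty deviation report; declaring that `z` should also depend on `b` would produce a reported deviation for `z`.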
SEMANTIC METADATA VALIDATION
Disclosed herein are system, method, and computer program product embodiments for validating resources within an IT system using a syntax agnostic validation mechanism. Metadata objects describing a resource may be generated by multiple processes in the IT system and be of different metadata formats. These metadata objects may be parsed into a unified semantic graph over which validation rules may be applied. The semantic graph and a validation ruleset comprising one or more validation rules may be input into a validation engine. The validation engine may interpret the validation rules into logical assertions and then apply them over the semantic graph. The validation engine may then generate a validation report indicating whether the graph is conformant. The validation report may include information about any validation failures that may have occurred and may be displayed to a user on a client device via a graphical user interface.
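A minimal sketch of the syntax-agnostic pipeline, assuming a triple-set stands in for the semantic graph and rules are plain predicate functions (the two metadata formats, the rule shape, and all names are invented for illustration):

```python
import json

def parse_to_graph(metadata_objects):
    """Parse metadata objects of different formats into a unified semantic
    graph of (subject, predicate, object) triples. Two hypothetical
    formats: 'json' bodies and 'kv' key=value lines."""
    triples = set()
    for obj in metadata_objects:
        if obj["format"] == "json":
            data = json.loads(obj["body"])
        elif obj["format"] == "kv":
            data = dict(line.split("=", 1) for line in obj["body"].splitlines())
        else:
            raise ValueError(f"unknown format: {obj['format']}")
        for key, value in data.items():
            triples.add((obj["resource"], key, str(value)))
    return triples

def validate(graph, ruleset):
    """Interpret each rule as a logical assertion over the graph and
    generate a validation report listing any failures."""
    failures = [rule["id"] for rule in ruleset if not rule["assert"](graph)]
    return {"conformant": not failures, "failures": failures}

objects = [
    {"resource": "db1", "format": "json", "body": '{"owner": "team-a"}'},
    {"resource": "db1", "format": "kv", "body": "encrypted=true"},
]
ruleset = [
    {"id": "has-owner", "assert": lambda g: any(p == "owner" for _, p, _o in g)},
    {"id": "encrypted", "assert": lambda g: ("db1", "encrypted", "true") in g},
]
print(validate(parse_to_graph(objects), ruleset))
```

Because the rules see only the unified graph, they are independent of whichever format each producing process emitted.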
AUTOMATIC NON-CODE TEST SUITE GENERATION FROM API SPECIFICATION
Disclosed herein are system, method, and computer program product embodiments for automatic non-code test suite generation from an application programming interface (API) specification. An embodiment operates by receiving a specification of an API, wherein the API comprises a plurality of endpoints. The embodiment generates, using a parser, an abstraction model corresponding to the specification of the API, wherein the abstraction model comprises a plurality of entities corresponding to the plurality of endpoints. The embodiment identifies, based on the abstraction model, an operation that is applicable to an entity of the plurality of entities. The embodiment then generates a functional test based on a use case corresponding to the entity and the operation.
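The specification-to-test pipeline might look like the following sketch, where a tiny OpenAPI-like dict stands in for the specification and the generated "non-code" tests are plain-language step lists (the spec shape and all names are assumptions):

```python
def build_abstraction_model(spec):
    """Parse an API specification into an abstraction model: one entity
    per endpoint, with the operations applicable to that entity."""
    return [{"entity": path.strip("/"), "operations": ops}
            for path, ops in spec["paths"].items()]

def generate_tests(model):
    """Generate one functional test per (entity, operation) use case,
    expressed as human-readable steps rather than code."""
    tests = []
    for ent in model:
        for op in ent["operations"]:
            tests.append({
                "name": f"{op}_{ent['entity']}",
                "steps": [f"send {op.upper()} /{ent['entity']}",
                          "assert response status is 2xx"],
            })
    return tests

spec = {"paths": {"/orders": ["get", "post"], "/users": ["get"]}}
tests = generate_tests(build_abstraction_model(spec))
print([t["name"] for t in tests])  # ['get_orders', 'post_orders', 'get_users']
```

Separating the parser (spec to model) from the generator (model to tests) is what lets the same test-generation logic serve multiple specification formats.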
SYSTEM AND METHOD FOR PROVIDING PROVABLE END-TO-END GUARANTEES ON COMMODITY HETEROGENEOUS INTERCONNECTED COMPUTING PLATFORMS
Disclosed herein is a system architecture that structures commodity heterogeneous interconnected computing platforms around universal object abstractions, which are a fundamental system abstraction and building block that provides practical and provable end-to-end guarantees of security, correctness, and timeliness for the platform.
Systems and methods for testing models
This application relates to systems and methods for automatically generating experiments based on experiment requests routed to micro-services (model sub-components) using a prefix-based routing mechanism. In some examples, experiment requests may be parsed to determine lower layer services (e.g., components) whose properties need to be changed for a model iteration. Prefixes in requests may be used to route the experiment requests and portions thereof to appropriate services or layers for configuration at the micro-service level. Routing tables at each higher layer may be utilized to determine the correct sub-layers to redirect a request and/or portion thereof. At the micro-service level, each micro-service may store and use a configuration table to match a received parameter in a request with a property and its corresponding value for the experiment.
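A compressed sketch of the two tables working together, assuming a dotted-prefix naming scheme and longest-prefix matching (the scheme, the table shapes, and all service names are illustrative, not from the application):

```python
def longest_prefix_route(routing_table, key):
    """Route a parameter key to the service registered under the longest
    matching prefix."""
    match = max((p for p in routing_table if key.startswith(p)),
                key=len, default=None)
    return routing_table.get(match)

def dispatch(request, routing_table, config_tables):
    """Split an experiment request into per-parameter updates, route each
    to the micro-service whose prefix matches, and let that service's
    configuration table map the parameter to a concrete property."""
    applied = {}
    for key, value in request["params"].items():
        service = longest_prefix_route(routing_table, key)
        prop = config_tables[service][key.rsplit(".", 1)[-1]]
        applied.setdefault(service, {})[prop] = value
    return applied

routing_table = {"model.encoder": "encoder-svc", "model": "model-svc"}
config_tables = {
    "encoder-svc": {"layers": "num_layers"},
    "model-svc": {"lr": "learning_rate"},
}
request = {"params": {"model.encoder.layers": 12, "model.lr": 0.001}}
print(dispatch(request, routing_table, config_tables))
# {'encoder-svc': {'num_layers': 12}, 'model-svc': {'learning_rate': 0.001}}
```

Longest-prefix matching is what lets `model.encoder.layers` bypass the generic `model` route and land at the encoder micro-service.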
Verifying confidential machine learning models
Methods, systems, and computer program products for verifying confidential machine learning models are provided herein. A computer-implemented method includes obtaining (i) a set of training data and (ii) a request, from a requestor, for a machine learning model, wherein the request is accompanied by at least a set of test data; obtaining a commitment from a provider in response to the request, the commitment comprising a special hash corresponding to parameters of a candidate machine learning model trained on the set of training data; revealing the set of test data to the requestor; obtaining, from the requestor, (i) a claim of performance of the candidate machine learning model for the test data and (ii) a proof of the performance of the candidate machine learning model; and verifying the claimed performance for the requestor based on (i) the special hash and (ii) the proof of the claimed performance.
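The commit-then-verify shape of the protocol can be illustrated with a deliberately naive stand-in: a SHA-256 hash plays the role of the "special hash", and the proof simply reveals the parameters and test evaluation (a real scheme would use a succinct or zero-knowledge proof so the parameters stay confidential; the toy sign-classifier model and all names are assumptions):

```python
import hashlib
import json

def commit(params):
    """Provider's commitment: a hash over the candidate model's parameters."""
    return hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()

def verify(commitment, claimed_accuracy, proof):
    """Requestor-side check: confirm the revealed parameters match the
    commitment, then re-evaluate the model on the test data and compare
    with the claimed performance."""
    params, test_data = proof["params"], proof["test_data"]
    if commit(params) != commitment:
        return False  # parameters differ from the committed model
    # Toy model: predict the sign of w*x + b as a boolean label.
    correct = sum(1 for x, y in test_data
                  if (params["w"] * x + params["b"] >= 0) == y)
    return correct / len(test_data) == claimed_accuracy

params = {"w": 1.0, "b": -0.5}
c = commit(params)
test_data = [(0.0, False), (1.0, True), (2.0, True), (0.2, True)]
proof = {"params": params, "test_data": test_data}
print(verify(c, 0.75, proof))  # True: 3 of 4 predictions correct
```

The commitment binds the provider to one specific model before the test data is revealed, so the claimed performance cannot be retrofitted to the test set.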
Development System and Method for a Conversational Application
A method, computer program product, and computing system for enabling usage of a conversational application by a plurality of users; gathering usage data concerning usage of the conversational application by the plurality of users; defining a visual representation of the conversational application; and overlaying the usage data onto the visual representation of the conversational application to generate visual traffic flow data.
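The overlay step can be sketched as annotating the edges of a dialog-flow graph with traversal counts gathered from user sessions (the graph representation, the session format, and all node names are illustrative assumptions):

```python
from collections import Counter

def visual_traffic_flow(flow_graph, sessions):
    """Overlay gathered usage data onto a visual representation of a
    conversational application: annotate each edge of the dialog flow
    with how many user sessions traversed it."""
    counts = Counter()
    for path in sessions:
        counts.update(zip(path, path[1:]))  # consecutive node pairs = edges
    return [{"from": a, "to": b, "traffic": counts.get((a, b), 0)}
            for a, b in flow_graph]

flow_graph = [("greet", "menu"), ("menu", "billing"), ("menu", "support")]
sessions = [["greet", "menu", "billing"],
            ["greet", "menu", "support"],
            ["greet", "menu", "billing"]]
print(visual_traffic_flow(flow_graph, sessions))
```

A renderer could then scale each edge's visual weight by its `traffic` value to produce the visual traffic flow data described above.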
METHOD AND SYSTEM FOR IDENTIFYING STATIC ANALYSIS ALARMS BASED ON SEMANTICS OF CHANGED SOURCE CODE
This disclosure relates generally to a method and system for identifying static analysis alarms based on semantics of changed source code. The disclosed technique is integrated in the proprietary static analysis tool that identifies semantics of the change and reports only impacted alarms. The method receives source code and a property over variables to be verified for identifying one or more impacted alarms. Further, an incremental analysis based on the one or more change program points is performed to mark one or more impacted functions in the current version of the source code, and a data flow analysis (DFA) and a program dependence graph (PDG) are generated for the one or more impacted functions. Further, a change-based alarm identification technique is utilized to identify the one or more impacted static analysis alarms from the one or more impacted functions in the current version of the source code based on semantics of the change.
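The impacted-alarm filtering idea can be sketched in simplified form: instead of walking a PDG/DFA, this toy version marks a function as impacted if it was changed or (transitively) calls a changed function, and reports only the alarms in impacted functions (the call-graph reachability criterion and all names are assumptions, much coarser than the semantic analysis the disclosure describes):

```python
def impacted_alarms(alarms, changed_points, call_graph):
    """Report only alarms located in functions impacted by a change.
    A function is impacted if it contains a change program point or
    transitively calls an impacted function."""
    impacted = set(changed_points)
    changed = True
    while changed:  # fixed-point over the call graph
        changed = False
        for caller, callees in call_graph.items():
            if caller not in impacted and impacted & set(callees):
                impacted.add(caller)
                changed = True
    return [a for a in alarms if a["function"] in impacted]

call_graph = {"main": ["parse", "render"], "parse": ["read_token"]}
alarms = [{"id": 1, "function": "render"},
          {"id": 2, "function": "parse"},
          {"id": 3, "function": "main"}]
print(impacted_alarms(alarms, {"read_token"}, call_graph))
# alarms 2 and 3: parse calls the changed read_token, and main calls parse
```

The payoff is the same as in the disclosure: alarms in unaffected functions (here, `render`) are suppressed rather than re-reported on every analysis run.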
Server and control method thereof
A control method of a server is provided. The method includes acquiring code information about a program, identifying at least one error with respect to a code style included in the code information based on a predetermined code style rule, acquiring at least one error information with respect to the identified code style, and modifying the code style by inputting the code information and the error information to an artificial intelligence model in which the code style rule is trained.
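The error-identification stage ahead of the AI model might resemble this sketch, where predetermined code style rules are simple per-line patterns and the resulting error information is what would be fed, together with the code, to the trained model (the two rules and all names are invented for illustration):

```python
import re

# Predetermined code style rules: (rule id, violation pattern, message).
STYLE_RULES = [
    ("trailing-whitespace", re.compile(r"[ \t]+$"), "trailing whitespace"),
    ("snake-case-def", re.compile(r"^def .*[A-Z]"),
     "function names should be snake_case"),
]

def identify_style_errors(code):
    """Identify code-style errors in `code` against the predetermined
    rules, producing the error information a trained model would consume
    when modifying the code style."""
    errors = []
    for lineno, line in enumerate(code.splitlines(), 1):
        for rule_id, pattern, message in STYLE_RULES:
            if pattern.search(line):
                errors.append({"line": lineno, "rule": rule_id,
                               "message": message})
    return errors

code = "def doThing(x):   \n    return x\n"
print(identify_style_errors(code))  # two errors, both on line 1
```

Pairing the raw code with structured error records, rather than the code alone, gives the downstream model explicit targets for its rewrites.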
Automated and adaptive validation of a user interface
Aspects of the disclosure relate to an automated and adaptive validation of a user interface. A computing platform may extract, from a webpage, one or more components of the webpage. Subsequently, the computing platform may determine, for a component, one or more attributes and one or more rules. Then, the computing platform may associate, by applying a clustering algorithm and based on the one or more attributes and the one or more rules, the component with a cluster of a plurality of clusters. Then, the computing platform may retrieve, from a database and for the cluster, a master test script. Subsequently, the computing platform may generate, from the master test script, a test script for execution, and may run, for the webpage, the test script to validate the component. Subsequently, the computing platform may trigger one or more recommendations based on a determination of whether the test script is successful.
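The cluster-then-instantiate idea can be sketched as follows; nearest-cluster assignment by attribute overlap stands in for the clustering algorithm, and the master scripts are plain step templates (the cluster definitions, script contents, and all names are assumptions):

```python
def cluster_components(components, clusters):
    """Associate each extracted webpage component with a cluster based on
    its attributes and rules: pick the cluster sharing the most features
    (a crude stand-in for a real clustering algorithm)."""
    assignment = {}
    for comp in components:
        feats = set(comp["attributes"]) | set(comp["rules"])
        assignment[comp["id"]] = max(
            clusters, key=lambda c: len(feats & clusters[c]))
    return assignment

def generate_test_script(component, cluster, master_scripts):
    """Instantiate the cluster's master test script for a concrete component."""
    return [step.format(id=component["id"]) for step in master_scripts[cluster]]

clusters = {"input-field": {"type=text", "required"},
            "button": {"onclick", "clickable"}}
master_scripts = {
    "input-field": ["focus #{id}", "type sample text", "assert value accepted"],
    "button": ["click #{id}", "assert action fired"],
}
comp = {"id": "email", "attributes": ["type=text"], "rules": ["required"]}
cluster = cluster_components([comp], clusters)["email"]
print(cluster, generate_test_script(comp, cluster, master_scripts))
```

Keeping one master script per cluster, rather than one script per component, is what makes the validation adaptive: a newly extracted component only needs a cluster assignment to get a runnable test script.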