Patent classifications
G06F11/3688
REST API Validation
Embodiments validate representational state transfer (“REST”) application program interfaces (“API”). Embodiments receive a REST API specification that provides information for a plurality of REST APIs and parse the REST API specification to extract, for each REST API, a corresponding Uniform Resource Locator (“URL”) and corresponding parameter names, response codes, and payloads. Embodiments convert the parsed REST API specification into a converted text file, the converting including parameter constraints and parameter default values. Embodiments then generate all possible combinations of test data for each REST API from the converted text file and perform one or more test operations on each of the combinations of test data.
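The combination-generation step described above can be sketched as a Cartesian product over per-parameter candidate values. This is a minimal illustration, assuming the converted text file has already been reduced to a dict of candidate value lists (a hypothetical structure; the patent does not specify the file format):

```python
# Sketch: generate all test-data combinations for one REST API.
from itertools import product

def generate_test_combinations(param_values):
    """Yield every combination of candidate values, one per parameter.

    param_values: dict mapping parameter name -> list of candidate
    values (defaults, boundary values derived from constraints, etc.).
    """
    names = list(param_values)
    for combo in product(*(param_values[n] for n in names)):
        yield dict(zip(names, combo))

# Example: two parameters with constraint-derived candidates.
combos = list(generate_test_combinations({
    "limit": [0, 1, 100],        # min, default, max from constraints
    "format": ["json", "xml"],   # enumerated values
}))
```

Each yielded dict is one test-data combination that a test operation (for example, an HTTP request against the API's URL) could consume.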
INITIALIZE OPTIMIZED PARAMETER IN DATA PROCESSING SYSTEM
An approach is provided in which the approach loads a machine learning model and a set of test case statistical data into a user system. The set of test case statistical data is based on a set of test cases corresponding to the machine learning model and includes a plurality of input parameter sets and a corresponding set of output quality measurements. The approach compares user data on the user system against the set of test case statistical data and identifies one of the plurality of input parameter sets to optimize the machine learning model based on the set of output quality measurements. The approach generates an optimized machine learning model using the machine learning model and the identified input parameter set at the user system.
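The selection step above, picking an input parameter set by its output quality measurement, can be sketched as follows. The record schema (`"params"`, `"quality"`) is an assumption for illustration; the patent does not fix a format for the test case statistical data:

```python
# Sketch: choose the input parameter set with the best quality score.
def select_parameter_set(statistics):
    """Return the input parameter set with the highest output quality.

    statistics: list of {"params": {...}, "quality": float} records,
    one per test case run against the machine learning model.
    """
    best = max(statistics, key=lambda rec: rec["quality"])
    return best["params"]

chosen = select_parameter_set([
    {"params": {"batch_size": 8,  "threads": 2}, "quality": 0.91},
    {"params": {"batch_size": 16, "threads": 4}, "quality": 0.97},
    {"params": {"batch_size": 32, "threads": 4}, "quality": 0.88},
])
```

The chosen parameter set would then be applied to the model on the user system to produce the optimized model.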
NETWORK BASED TESTING OF MOBILE DEVICE KERNELS SYSTEM AND METHOD
A method is disclosed and includes determining a test plan to test a kernel on a mobile device, and determining an interaction input message according to the test plan, the interaction input message comprising first data. The method also includes transmitting the interaction input message comprising the first data to the mobile device over a network-based communication channel. The kernel in the mobile device generates an interaction output message in response to receiving the interaction input message. The method also includes receiving, from the mobile device, the interaction output message comprising second data over the network-based communication channel, and determining whether the interaction output message is consistent with the test plan.
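The final consistency check can be sketched as below. The message and plan shapes are hypothetical stand-ins; the transport over the network-based channel is omitted:

```python
# Sketch: decide whether an interaction output message matches the plan.
def is_consistent(test_plan, output_message):
    """An output is consistent when it echoes a test id that the plan
    knows and reports one of the outcomes the plan allows for it."""
    step = test_plan["steps"].get(output_message.get("test_id"))
    if step is None:
        return False
    return output_message.get("result") in step["allowed_results"]

plan = {"steps": {"boot-01": {"allowed_results": {"ok", "warn"}}}}
```

For example, `is_consistent(plan, {"test_id": "boot-01", "result": "ok"})` passes, while an unknown test id or a disallowed result fails.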
DEVICE TESTING ARRANGEMENT
An arrangement for automated testing of mobile devices comprises a learning arrangement for learning how to use test devices that do not match an earlier-defined test case pattern. In the arrangement, the learning arrangement generates instructions for performing a set of tasks. The tasks are then executed in the mobile device being tested. The mobile device provides feedback in the form of error/success messages, screenshots, source code, return values, and the like. Based on the feedback and earlier accumulated information, the learning arrangement can generate a new set of instructions in order to execute the set of tasks successfully.
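The feedback loop described above can be sketched generically. The `execute` and `regenerate` callables are hypothetical stand-ins for the device executor and the learning arrangement's instruction generator:

```python
# Sketch: regenerate instructions from accumulated feedback until the
# task set executes successfully.
def learn_to_execute(tasks, execute, regenerate, max_attempts=5):
    """Run the instructions, collect feedback, and let the accumulated
    feedback history drive a new instruction set after each failure."""
    history = []
    instructions = regenerate(tasks, history)
    for _ in range(max_attempts):
        feedback = execute(instructions)   # error/success messages, etc.
        history.append(feedback)
        if feedback["status"] == "success":
            return instructions
        instructions = regenerate(tasks, history)
    return None

# Toy device: tasks only succeed if the device is unlocked first.
def toy_execute(instr):
    if instr and instr[0] == "unlock":
        return {"status": "success"}
    return {"status": "error", "hint": "locked"}

def toy_regenerate(tasks, history):
    if any(f.get("hint") == "locked" for f in history):
        return ["unlock"] + list(tasks)
    return list(tasks)

result = learn_to_execute(["open_app"], toy_execute, toy_regenerate)
```

In the toy run, the first attempt fails with a "locked" hint, and the regenerated instructions prepend an unlock step, mirroring how accumulated feedback shapes the next instruction set.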
INTELLIGENT VALIDATION OF NETWORK-BASED SERVICES VIA A LEARNING PROXY
Techniques described herein are directed to the intelligent validation of network-based services via a proxy. The proxy is communicatively coupled to a first network-based service and a second network-based service. The proxy is utilized to validate the functionality of the first network-based service with respect to the second network-based service. The proxy initially operates in a first mode in which the proxy monitors and analyzes the transactions between the first and second network-based services and learns the behavior of the second network-based service. The proxy then operates in a second mode in which the proxy simulates the learned behavior of the second network-based service. When operating in the second mode, requests initiated by the first network-based service and intended for the second network-based service are provided to the proxy, and the proxy generates a response to each request in accordance with the learned behavior of the second network-based service.
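The two proxy modes amount to a record-and-replay pattern, sketched below. Keying responses by a raw request string is a simplification (real traffic would need request canonicalization), and the class and method names are illustrative, not the patent's API:

```python
# Sketch: a proxy that learns responses in one mode and replays them
# in the other.
class LearningProxy:
    def __init__(self):
        self.learned = {}      # request key -> observed response
        self.mode = "learn"    # "learn" or "simulate"

    def observe(self, request, response):
        """First mode: record the second service's observed behavior."""
        self.learned[request] = response

    def handle(self, request):
        """Second mode: answer on behalf of the learned service."""
        return self.learned.get(request)

proxy = LearningProxy()
proxy.observe("GET /status", {"code": 200, "body": "up"})
proxy.mode = "simulate"
reply = proxy.handle("GET /status")
```

In the simulate mode, the first service's requests never reach the second service; unseen requests return `None` here, where a real proxy would need a fallback policy.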
VERIFICATION OF CONTROL COUPLING AND DATA COUPLING ANALYSIS IN SOFTWARE CODE
Methods and systems for verifying control coupling analysis in testing of software code include: selecting a source file to be tested, the source file having source code, the source file selected from a system set including a plurality of source files from one or more nodes in a system; identifying one or more control couples within the source file by performing static analysis on the source code of the source file; defining one or more test runs of the software code, the one or more test runs including one or more of the identified control couples, and the one or more test runs using dynamic analysis; executing the one or more defined test runs; identifying control coupling coverage of the source file based on the dynamic analysis; and generating a control coupling report based on the identified control coupling coverage of the source file.
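The coverage-identification step can be sketched as a set comparison between the control couples found by static analysis and those observed during the dynamic test runs. Representing a couple as a `(caller, callee)` pair is an assumption for illustration:

```python
# Sketch: compute control coupling coverage from static and dynamic
# analysis results.
def control_coupling_coverage(static_couples, executed_couples):
    """Return covered couples, uncovered couples, and a coverage ratio."""
    static_couples = set(static_couples)
    covered = static_couples & set(executed_couples)
    uncovered = static_couples - covered
    ratio = len(covered) / len(static_couples) if static_couples else 1.0
    return covered, uncovered, ratio

covered, uncovered, ratio = control_coupling_coverage(
    {("main", "parse"), ("main", "emit"), ("parse", "lex")},
    {("main", "parse"), ("parse", "lex")},   # couples seen in test runs
)
```

The uncovered set is exactly what a control coupling report would surface: couples the static analysis found but no defined test run exercised.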
METHOD AND SYSTEM FOR FUZZING WINDOWS KERNEL BY UTILIZING TYPE INFORMATION OBTAINED THROUGH BINARY STATIC ANALYSIS
Disclosed is a Windows kernel fuzzing technique utilizing type information obtained through binary static analysis. The method of fuzzing a kernel of a computer operating system, performed by a fuzzing system, may include the steps of: automatically inferring type information of a system call using a library file provided by the computer operating system; and performing system call fuzzing on the basis of the type information of the system call obtained through the inference.
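The type-guided generation step can be sketched as below, assuming argument types have already been inferred from the library metadata. The type names and generator table are illustrative; actually invoking a syscall with the generated arguments is omitted:

```python
# Sketch: build fuzz argument vectors from inferred syscall types.
import random

GENERATORS = {
    "u32":    lambda rng: rng.randrange(2**32),
    # Handles drawn from likely-interesting values rather than pure noise.
    "handle": lambda rng: rng.choice([0, 1, 0xFFFFFFFF]),
    "str":    lambda rng: "".join(rng.choice("abc/%.")
                                  for _ in range(rng.randrange(8))),
}

def fuzz_args(signature, rng):
    """Build one argument vector for a syscall from its inferred types."""
    return [GENERATORS[t](rng) for t in signature]

rng = random.Random(0)                       # seeded for reproducibility
args = fuzz_args(["u32", "handle", "str"], rng)
```

Typing the arguments this way is what separates this approach from blind fuzzing: each slot receives a value that is at least well-formed for its inferred type.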
COMPUTE PLATFORM FOR MACHINE LEARNING MODEL ROLL-OUT
There are provided systems and methods for a compute platform for machine learning model roll-out. A service provider, such as an electronic transaction processor for digital transactions, may provide intelligent decision-making through decision services that execute machine learning models. When deploying or updating machine learning models in these engines and decision services, a model package may include multiple models, each of which may have an execution graph required for model execution. When models are tested for proper execution, the models may have non-performant compute items, such as model variables, that lead to improper execution and/or decision-making. A model deployer may determine and flag these compute items as non-performant and may cause them to be skipped or excluded from execution. Further, the model deployer may utilize a pre-production computing environment to generate the execution graphs for the models prior to deployment or upgrading.
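The flag-and-skip behavior can be sketched as below. Treating the execution graph as an ordered list of compute items and the flags as a set are simplifying assumptions about the deployer's bookkeeping:

```python
# Sketch: execute a model's compute items, skipping flagged ones.
def execute_graph(steps, non_performant, run_step):
    """Run each compute item in order; items flagged as non-performant
    are excluded from execution and reported separately."""
    results, skipped = {}, []
    for name in steps:
        if name in non_performant:
            skipped.append(name)
            continue
        results[name] = run_step(name)
    return results, skipped

results, skipped = execute_graph(
    ["load_vars", "bad_feature", "score"],
    non_performant={"bad_feature"},          # flagged by the deployer
    run_step=lambda name: f"{name}:ok",
)
```

Skipping at the compute-item level lets the rest of the model's execution graph run unchanged, which is the point of flagging rather than rejecting the whole model package.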
Emulated edge locations in cloud-based networks for testing and migrating virtualized resources
Various techniques for emulating edge locations in cloud-based networks are described. An example method includes generating an emulated edge location in a region. The emulated edge location can include one or more first computing resources in the region. A host in the region may launch a virtualized resource using a portion of the one or more first computing resources. Output data that was output by the virtualized resource in response to input data can be received and reported to a user device, which may provide a request to migrate the virtualized resource to a non-emulated edge location. The non-emulated edge location may include one or more second computing resources that are connected to the region by an intermediary network. The virtualized resource can be migrated from the first computing resources to at least one second computing resource in the non-emulated edge location.
Systems and methods for margin based diagnostic tools for priority preemptive schedulers
In one embodiment, a method for margin determination for a computing system with a real time operating system and priority preemptive scheduling comprises: scheduling a set of tasks to be executed in one or more partitions, wherein each task is assigned a priority, and wherein the tasks comprise periodic and/or aperiodic tasks; executing the set of tasks on the computing system within a scheduled periodic time window; introducing an overhead task executed for an execution duration controlled either by the real time operating system or by the overhead task; controlling the overhead task to converge on a point of failure at which the length of the execution duration of the overhead task causes either: 1) a periodic task to fail to execute within a deadline, or 2) the time available for the aperiodic tasks to execute to fall below a threshold; and defining a partition margin corresponding to the point of failure.
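Converging on the point of failure can be sketched as a bisection over the overhead task's execution duration. The `schedule_ok` callback is a stand-in for actually running the partition on the RTOS and checking deadlines and aperiodic time against the threshold:

```python
# Sketch: bisect the overhead duration to find the partition margin.
def find_partition_margin(schedule_ok, lo=0.0, hi=1.0, tol=1e-4):
    """Return the largest overhead duration (as a fraction of the
    scheduled window) that still meets every constraint; this point
    of failure defines the partition margin."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if schedule_ok(mid):
            lo = mid       # constraints still met; push the overhead up
        else:
            hi = mid       # a deadline slipped; back off
    return lo

# Toy check: the partition tolerates up to 0.3 of the window as overhead.
margin = find_partition_margin(lambda d: d <= 0.3)
```

Bisection is one plausible convergence strategy; the patent only requires that the overhead duration be controlled so as to converge on the failure point.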