Patent classifications
G06F11/3457
3D model evaluation system
A 3D model evaluation system includes: a loading unit that loads 3D model data created by 3D CAD; a history checking unit that checks a creation history which is added to the 3D model data loaded by the loading unit and which is obtained in a case where the 3D model data is created by the 3D CAD; and an evaluation unit that evaluates a degree of coincidence between the creation history of the 3D model data checked by the history checking unit and a predetermined rule.
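As a rough illustration of evaluating a "degree of coincidence" between a creation history and a predetermined rule, one could compare the recorded operation sequence against an expected sequence. The metric below (a `difflib` similarity ratio) and the operation names are assumptions, not the patent's method:

```python
from difflib import SequenceMatcher

def degree_of_coincidence(history, rule):
    """Similarity ratio between a recorded 3D CAD operation history and a
    rule given as an expected operation sequence (illustrative metric)."""
    return SequenceMatcher(None, history, rule).ratio()

# Hypothetical creation history and rule; operation names are assumed.
history = ["sketch", "extrude", "fillet", "hole"]
rule = ["sketch", "extrude", "hole"]
score = degree_of_coincidence(history, rule)  # 3 matched items of 7 total
```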
Medical device system performance index
A distributed network system and method includes a processing unit configured to manage safety data for a plurality of medical devices, a database software component in communication with the processing unit, and a monitoring software component in communication with the processing unit. The monitoring software component is configured to monitor a number of messages between a number of medical devices and the processing unit, to process performance parameters to generate an overall performance index, and to generate an output that is viewable by a user. The output includes relative contributions of each of the performance parameters to the overall performance index, where the overall performance index is generated using a weighting factor associated with each of the performance parameters. The performance parameters include the number of messages waiting to be processed, which has the largest weighting factor, and a disk queue length, which has the smallest weighting factor.
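A minimal sketch of the weighted overall performance index described above. The abstract only fixes the ordering (message backlog weighted highest, disk queue length lowest); the specific parameter names, weights, and normalized-sum form here are assumptions:

```python
def overall_performance_index(params, weights):
    """Weighted sum of performance parameters, normalized by total weight."""
    total_weight = sum(weights[name] for name in params)
    return sum(params[name] * weights[name] for name in params) / total_weight

# Illustrative values; only the relative ordering of the two named weights
# (messages_waiting largest, disk_queue_length smallest) comes from the text.
weights = {"messages_waiting": 0.5, "cpu_utilization": 0.3, "disk_queue_length": 0.2}
params = {"messages_waiting": 0.8, "cpu_utilization": 0.4, "disk_queue_length": 0.1}

index = overall_performance_index(params, weights)
# Relative contribution of each parameter to the index, as in the output view.
total = sum(params[n] * weights[n] for n in params)
contributions = {k: params[k] * weights[k] / total for k in params}
```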
COMPUTER-IMPLEMENTED METHOD FOR SCENARIO-BASED TESTING AND/OR HOMOLOGATION OF AT LEAST PARTIALLY AUTONOMOUS DRIVING FUNCTIONS TO BE TESTED BY MEANS OF KEY PERFORMANCE INDICATORS (KPI)
A computer-implemented method for evaluating simulations and/or test cases in scenario-based testing and/or homologation of at least partially autonomous driving functions by key performance indicators (KPIs). KPIs are mapped by KPI plug-ins, which are selected dynamically and reusably for simulations and/or test cases. At least one KPI plug-in is selected by a KPI plug-in mechanism for purposes of simulation and/or test definition and is executed automatically by the KPI plug-in mechanism during execution.

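The plug-in mechanism above can be sketched as a registry of reusable KPI functions that are selected per test case and executed automatically against a trace. All names, KPI definitions, and the trace format are illustrative assumptions:

```python
class KPIPluginMechanism:
    """Minimal sketch: register reusable KPI plug-ins, select them
    dynamically per simulation/test case, and execute them automatically."""

    def __init__(self):
        self._plugins = {}

    def register(self, name, fn):
        # Plug-ins are registered once and reused across test cases.
        self._plugins[name] = fn

    def run(self, selected, trace):
        # Execute only the dynamically selected plug-ins on the test trace.
        return {name: self._plugins[name](trace) for name in selected}

mech = KPIPluginMechanism()
mech.register("min_ttc", lambda t: min(t["ttc"]))      # time-to-collision KPI
mech.register("max_decel", lambda t: max(t["decel"]))  # braking KPI
results = mech.run(["min_ttc"], {"ttc": [3.2, 1.8, 2.5], "decel": [0.1, 0.4]})
```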
COMPUTER-IMPLEMENTED METHOD FOR AUTOMATICALLY PROVIDING AN ADVICE FOR TEST PROCESSES
A computer-implemented method for the automatic provision of an Advice for test processes. The Advice is determined by at least one Advice-generator, which is selected manually and/or automatically for tests and/or simulations. The Advice-generator monitors at least two test runs, so that at least one event is detected in the test runs and at least one Advice is derived. The Advice-generator is executed automatically by an Advice-generator mechanism during test runs, and an Advice determined by the Advice-generator is provided to the test system and/or a user.
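An Advice-generator in this sense might be sketched as a monitor that scans multiple test runs, detects events (here, an assumed failure-rate threshold), and derives advice messages. The event definition, threshold, and message format are hypothetical:

```python
def advice_generator(test_runs, threshold=0.2):
    """Monitor test runs; detect an 'event' (failure rate over threshold,
    an assumed criterion) and derive an Advice message for each."""
    advice = []
    for run in test_runs:
        rate = run["failures"] / run["total"]
        if rate > threshold:  # event detected in this test run
            advice.append(f"Investigate run {run['id']}: failure rate {rate:.0%}")
    return advice

# At least two monitored test runs, as in the abstract.
runs = [{"id": 1, "failures": 1, "total": 10},
        {"id": 2, "failures": 4, "total": 10}]
msgs = advice_generator(runs)
```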
METHODS AND SYSTEMS FOR MONITORING DISTRIBUTED DATA-DRIVEN MODELS
A system for monitoring a data-driven model, configured to perform a task in a plurality of sites, includes a plurality of variants of the data-driven model deployed in each site. Each variant is used in one of a plurality of states including a first state wherein the output data of the variant is included in computing a result of the task, and a second state wherein the output data of the variant is excluded from computing the result of the task. A supervision module in each site monitors the plurality of variants, computes the task result based on the output data generated by each variant being used in the first state, and changes, based on the output data generated by variants being used in the first state and in the second state, the use state of a variant from one state to another.
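The two-state variant scheme can be sketched as follows: only first-state ("active") variants contribute to the task result, while second-state ("shadow") variants are monitored but excluded, and the supervision step may change a variant's state. The state names, averaging rule, and promotion criterion are all assumptions:

```python
ACTIVE, SHADOW = "active", "shadow"  # first state / second state

def compute_task_result(variants):
    """Average the outputs of variants in the first (included) state."""
    included = [v["output"] for v in variants if v["state"] == ACTIVE]
    return sum(included) / len(included)

def supervise(variants, reference, tol=0.05):
    """Change a shadow variant's use state when its output tracks the
    active-variant result closely (an assumed promotion criterion)."""
    for v in variants:
        if v["state"] == SHADOW and abs(v["output"] - reference) < tol:
            v["state"] = ACTIVE
    return variants

variants = [{"name": "a", "state": ACTIVE, "output": 0.90},
            {"name": "b", "state": SHADOW, "output": 0.92}]
result = compute_task_result(variants)  # from the active variant only
variants = supervise(variants, result)
```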
Systems and methods for program code defect and acceptability for use determination
A code development engine can be programmed to evaluate build code that can be representative of program code at an instance of time during or after a software development of the program code to identify and correct coding errors in the build code. A code run-time simulation engine can be programmed to simulate the build code in a modeled program code environment for the program code to identify and correct coding failures in the build code. A build code output module can be programmed to evaluate the build code to determine whether the build code is acceptable for use in a program code environment based on a level of acceptable risk for the build code in response to the coding error and/or coding failure being corrected in the build code.
NETWORK WORKFLOW REPLAY TOOL
A method of automatically identifying and recreating tenant environment issues in a set of datacenters by a workflow replay tool is provided. Each datacenter includes a network manager server. The method analyzes, by the workflow replay tool, a set of log files generated in a particular tenant's environment to identify the tenant's workflows. The method analyzes, by the workflow replay tool, the network manager server databases of the tenant's environment to identify the logical entities in the tenant's environment used by the identified workflows. The method allocates resources in a lab environment to simulate the tenant's environment. The method reruns the identified workflows, by the workflow replay tool, using the allocated resources in the lab environment to recreate the tenant environment issues.
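A rough sketch of the replay flow: parse log files to identify workflows, then rerun each one against lab resources to try to reproduce the issue. The log format, `workflow=` key, and the lab runner are hypothetical stand-ins:

```python
import re

def extract_workflows(log_lines):
    """Identify workflow names from tenant log lines (assumed format)."""
    pattern = re.compile(r"workflow=(\w+)")
    return [m.group(1) for line in log_lines if (m := pattern.search(line))]

def replay(workflows, run_in_lab):
    """Rerun each identified workflow in the lab; True means the issue
    was reproduced (run_in_lab is a hypothetical callback)."""
    return {wf: run_in_lab(wf) for wf in workflows}

logs = ["2023-01-01 INFO workflow=create_segment ok",
        "2023-01-01 ERROR workflow=attach_port timeout"]
issues = replay(extract_workflows(logs),
                run_in_lab=lambda wf: wf == "attach_port")
```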
Product evaluation using transaction details from a production system
Techniques are disclosed relating to accessing, by an evaluation computer system, transaction details from a subset of current transactions being processed by a production version of a transaction processing service that is implemented on a production computer system. The evaluation computer system may perform, in real-time, tests on a particular product using the transaction details. The evaluation computer system may then store output from the tests that are performed using the transaction details.
Performance simulation for selected platforms for web products in database systems
In accordance with embodiments, there are provided mechanisms and methods for facilitating performance simulation for selected platforms for web products in database systems. In one embodiment and by way of example, a method includes evaluating metadata associated with contents relating to a web product to be delivered through one or more platforms, where the metadata identifies the one or more platforms, and analyzing the one or more platforms to host the web product to deliver the contents. The method may further include identifying one or more performance factors associated with the web product and the one or more platforms, where the one or more performance factors are identified based on one or more parameters associated with the one or more platforms to enhance performance associated with the web product when delivering the contents, and facilitating adjustments to one or more virtual dials to facilitate one or more modifications to the one or more parameters.
Systems and methods for optimizing a machine learning-informed automated decisioning workflow in a machine learning task-oriented digital threat mitigation platform
A system and method for adapting an errant automated decisioning workflow includes reconfiguring digital abuse or digital fraud logic parameters associated with automated decisioning routes of an automated decisioning workflow in response to identifying an anomalous drift or an anomalous shift in efficacy metrics of the automated decisioning workflow, wherein the automated decisioning workflow includes a plurality of distinct automated decisioning routes that, when applied in a digital threat evaluation of data associated with a target digital event, automatically compute a decision for disposing the target digital event based on a probability of digital fraud; simulating, by computers, a performance of the automated decisioning routes in a reconfigured state based on inputs of historical digital event data; calculating simulation metrics based on simulation output data of the simulation; and promoting to an in-production state the automated decisioning workflow having the automated decisioning routes in the reconfigured state.
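The simulation step above can be sketched as replaying historical events through reconfigured decisioning routes (here, assumed threshold rules on a fraud probability) and computing a simulation metric. The route structure, thresholds, and metric are illustrative assumptions:

```python
def decide(prob, block_threshold, review_threshold):
    """Assumed decisioning routes: dispose an event by fraud probability."""
    if prob >= block_threshold:
        return "block"
    if prob >= review_threshold:
        return "review"
    return "accept"

def simulate(events, block_threshold, review_threshold):
    """Replay historical events through the reconfigured routes and compute
    a simulation metric (share of known-fraud events that were blocked)."""
    decisions = [decide(e["prob"], block_threshold, review_threshold)
                 for e in events]
    fraud = [d for d, e in zip(decisions, events) if e["is_fraud"]]
    return decisions, fraud.count("block") / len(fraud)

# Hypothetical historical digital event data with ground-truth labels.
events = [{"prob": 0.95, "is_fraud": True},
          {"prob": 0.60, "is_fraud": True},
          {"prob": 0.10, "is_fraud": False}]
decisions, block_rate = simulate(events, block_threshold=0.9, review_threshold=0.5)
```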