Patent classifications
G06F11/3457
Digital twin workflow simulation
Systems, methods, and computer program products for simulating workflows and activities of physical assets using digital twin models. User-defined simulations are performed by selecting the digital twin components to be analyzed during the simulation, concentrating the analysis on those selectively defined components and bypassing components that will not be simulated. Users can design the digital twin simulation using one or more available digital twin models. The model can be the most current digital twin model, a previous version of a model, or a hybridized model comprising components or portions from multiple versions of the available digital twins. Users can further customize simulations by selecting components or sections of the digital twin model to bypass during the simulation, or by providing overriding values for non-simulated portions of the digital twin; these values can be used as entry criteria input into the next simulated section or component of the digital twin to complete the simulation.
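The selective-bypass idea in this abstract can be illustrated with a minimal sketch. All names here (`run_simulation`, the component chain, the override values) are hypothetical illustrations, not taken from the patent: a bypassed component's override value simply becomes the entry input for the next simulated component.

```python
# Hypothetical sketch of selective digital-twin simulation: components
# named in `simulate` are executed; all others are bypassed and replaced
# by a user-supplied override value that feeds the next simulated step.

def run_simulation(components, simulate, overrides, initial_input):
    """components: ordered list of (name, fn) pairs forming the twin;
    simulate: set of component names to actually run;
    overrides: name -> value used as entry criteria when bypassed."""
    value = initial_input
    for name, fn in components:
        if name in simulate:
            value = fn(value)          # simulate this component
        else:
            value = overrides[name]    # bypass: override becomes the input
    return value

# Example twin of three chained components (illustrative only).
twin = [
    ("intake",  lambda x: x * 2),
    ("process", lambda x: x + 10),
    ("output",  lambda x: x - 1),
]

# Simulate only "process" and "output"; bypass "intake" with override 8:
# intake bypassed -> 8, process -> 18, output -> 17.
result = run_simulation(twin, {"process", "output"}, {"intake": 8}, 3)
```

The same call with all three components in `simulate` runs the full model, so one function covers current, previous, or hybrid model versions by swapping the `components` list.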
Role-based failure response training for distributed systems
Methods, systems, and computer-readable media for role-based failure response training for distributed systems are disclosed. A failure response training system determines a failure mode associated with an architecture for a distributed system comprising a plurality of components. The training system generates a scenario based at least in part on the failure mode. The scenario comprises an initial state of the distributed system which is associated with one or more initial metrics indicative of a failure. The training system provides, to a plurality of users, data describing the initial state. The training system solicits user input representing modification of a configuration of the components. The training system determines a modified state of the distributed system based at least in part on the input. The performance of the distributed system in the modified state is indicated by one or more modified metrics differing from the one or more initial metrics.
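The scenario-generation and user-input loop described above can be sketched as follows. This is an illustrative assumption of how such a trainer might work; the function names, components, and metric model are invented for the example and are not from the patent.

```python
import random

def generate_scenario(failure_mode, components, seed=0):
    """Produce an initial state whose metrics indicate a failure
    consistent with the given failure mode (toy metric model)."""
    rng = random.Random(seed)
    state = {c: {"healthy": True} for c in components}
    failed = rng.choice(components)
    state[failed]["healthy"] = False
    metrics = {"error_rate": 0.35 if failure_mode == "overload" else 0.20,
               "latency_ms": 900}
    return {"state": state, "metrics": metrics, "failed": failed}

def apply_user_input(scenario, modifications):
    """Apply the users' configuration changes and recompute metrics
    for the modified state."""
    state = {c: dict(v) for c, v in scenario["state"].items()}
    for component, change in modifications.items():
        state[component].update(change)
    # Toy model: restoring the failed component clears the failure metrics.
    if state[scenario["failed"]].get("healthy"):
        metrics = {"error_rate": 0.01, "latency_ms": 50}
    else:
        metrics = scenario["metrics"]
    return {"state": state, "metrics": metrics, "failed": scenario["failed"]}
```

A trainee who repairs the failed component sees the modified metrics diverge from the initial ones, matching the abstract's final claim.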
Configuring new storage systems based on write endurance
A method performed by a computing device, of configuring a new design of a new data storage system (DSS) having initial configuration parameters is provided. The new design includes an initial plurality of storage drives. The method includes (a) collecting operational information from a plurality of remote DSSs in operation, the operational information including numbers of writes of various write sizes received by respective remote DSSs of the plurality of remote DSSs over time; (b) modeling a number of drive writes per day (DWPD) of the initial plurality of storage drives of the new DSS based on the collected operational information from the plurality of remote DSSs and the initial configuration parameters; (c) comparing the modeled number of DWPD to a threshold value; and (d) in response to the modeled number of DWPD exceeding the threshold value, reconfiguring the new DSS with an updated design.
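The DWPD modeling step in (b)-(d) reduces to straightforward arithmetic over the collected write telemetry. The sketch below is an assumed simplification (single aggregated fleet, even spread across drives); the function names are illustrative, not from the patent.

```python
def modeled_dwpd(write_counts_by_size, fleet_days, drive_count,
                 drive_capacity_bytes):
    """Estimate drive writes per day (DWPD) for a proposed design.

    write_counts_by_size: {write_size_bytes: count}, aggregated from
    operational information collected from remote DSSs over fleet_days.
    Assumes writes are spread evenly across the design's drives.
    """
    total_bytes = sum(size * count
                      for size, count in write_counts_by_size.items())
    bytes_per_day = total_bytes / fleet_days
    per_drive_per_day = bytes_per_day / drive_count
    # One "drive write" = writing the drive's full capacity once.
    return per_drive_per_day / drive_capacity_bytes

def needs_redesign(dwpd, threshold=1.0):
    """Step (d): reconfigure the design if modeled DWPD exceeds the
    endurance threshold of the chosen drives."""
    return dwpd > threshold
```

In practice the threshold would come from the endurance rating of the candidate drive model, and a redesign might substitute higher-endurance drives or add drives to spread the write load.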
MANAGEMENT COMPUTER AND COMPUTER SYSTEM MANAGEMENT METHOD
The management computer stores configuration information of a storage, configuration information of a host computer and a VM, information on a service level of the VM, and performance information of a storage subsystem and a network. If an access path that the host computer uses to access a volume is changed in response to a change of storage configuration, the I/O performance of the VM operating on the host computer may change. If a change of state of the storage is detected, the management computer calculates whether the service level defined for the VM is still satisfied, and selects an appropriate host computer on which the VM should operate.
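The service-level check and host selection can be sketched as below. The data shapes (IOPS as the service-level metric, a per-(host, volume) performance table) are assumptions made for illustration and are not specified by the abstract.

```python
def service_level_satisfied(vm, host, storage_perf):
    """Check whether the VM's required IOPS can be met from this host,
    given measured per-access-path storage performance."""
    available = storage_perf.get((host, vm["volume"]), 0)
    return available >= vm["required_iops"]

def select_host(vm, hosts, storage_perf):
    """Pick a host that satisfies the VM's service level, preferring
    the one with the most performance headroom; None if no host fits."""
    candidates = [h for h in hosts
                  if service_level_satisfied(vm, h, storage_perf)]
    if not candidates:
        return None
    return max(candidates, key=lambda h: storage_perf[(h, vm["volume"])])
```

A management computer would refresh `storage_perf` whenever a storage state change is detected, then re-run `select_host` for each affected VM.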
Virtual dialog system performance assessment and enrichment
Embodiments are provided that relate to a computer system, a computer program product, and a computer-implemented method for improving performance of a virtual dialog agent system employing an automated virtual dialog agent. Embodiments involve generating ground truth (GT) from a user's knowledge base, and leveraging the GT to evaluate how the virtual dialog agent performs with the GT. The evaluation measures quality of a multi-turn virtual dialog, and generates a remediation plan directed at an algorithmic improvement of the virtual dialog agent.
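One way to picture the GT-based evaluation is as a comparison of dialog turns against ground-truth question/answer pairs, with missed turns feeding a remediation plan. This is a deliberately minimal interpretation; the scoring rule and remediation wording are invented for the sketch.

```python
def evaluate_dialog(ground_truth, dialog_turns):
    """Score a multi-turn dialog against ground truth generated from a
    user's knowledge base, and build a simple remediation plan for the
    turns the virtual dialog agent got wrong."""
    hits, misses = 0, []
    for question, answer in dialog_turns:
        expected = ground_truth.get(question)
        if expected is not None and expected == answer:
            hits += 1
        else:
            misses.append(question)
    quality = hits / len(dialog_turns) if dialog_turns else 0.0
    plan = [f"retrain intent for: {q}" for q in misses]
    return quality, plan
```

A real system would use semantic matching rather than exact string equality, but the shape of the output, a quality measure plus an algorithmic improvement plan, mirrors the abstract.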
METHOD AND APPARATUS FOR TESTING AI CHIP COMPUTING PERFORMANCE, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
Provided are a method and an apparatus for testing AI chip computing performance, and a non-transitory computer-readable storage medium. The method includes: forming computing performance result data of a to-be-tested AI chip according to a plurality of items of simulation data formed in a development process of the to-be-tested AI chip; acquiring a function instruction set matched with a to-be-tested service function, wherein the function instruction set is composed of a plurality of instructions in a standard instruction set matched with the to-be-tested AI chip; and predicting computing time required by the to-be-tested AI chip to execute the to-be-tested service function according to the function instruction set and the computing performance result data.
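The prediction step amounts to combining per-instruction performance results (from simulation during chip development) with the function instruction set. A minimal sketch, assuming the performance data is a cycles-per-instruction table and a clock rate; these names and units are illustrative, not from the patent.

```python
def predict_compute_time_us(function_instructions, cycles_per_instr,
                            clock_mhz):
    """Predict the computing time (microseconds) the to-be-tested AI
    chip needs to execute a service function, from the function's
    instruction sequence and simulated per-instruction cycle counts.

    clock_mhz is cycles per microsecond, so total_cycles / clock_mhz
    yields microseconds.
    """
    total_cycles = sum(cycles_per_instr[op] for op in function_instructions)
    return total_cycles / clock_mhz
```

Real chips overlap instructions in pipelines, so a production model would be more elaborate, but this captures the abstract's idea of predicting time from an instruction set plus simulation-derived performance result data.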
Simulated Data Center
A system, method, and computer-readable medium are disclosed for performing a data center monitoring and management operation. The data center monitoring and management operation includes: selecting a data center asset for simulation; identifying a set of session input data for use during simulation; and, performing a data center asset simulation session operation for the data center asset based upon the set of session input data.
ANALYZING PERFORMANCE METRICS FOR IMPROVING TECHNOLOGY ENVIRONMENT OF A SOFTWARE APPLICATION
A system is configured to obtain a plurality of performance metrics related to performance of a software application in a current application environment and each of a plurality of model application environments. The system assigns a score to each of the performance metrics collected for the current application environment and each of the model application environments, compares the respective scores assigned to each performance metric collected for the current application environment and each of the model application environments, and detects that at least one model application environment has a higher score associated with at least one performance metric as compared to the respective score of the at least one performance metric collected for the current application environment. The system determines a recommendation to use the at least one model application environment for the software application based on the detecting.
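The score-assign-compare-recommend flow above can be sketched directly. The weighting scheme and "beats the current environment on at least one metric" rule are assumptions chosen to match the abstract's wording; all names are illustrative.

```python
def assign_scores(env_metrics, weights):
    """Assign a score to each collected performance metric (here a
    simple weighted value; weights default to 1.0)."""
    return {name: value * weights.get(name, 1.0)
            for name, value in env_metrics.items()}

def recommend_environments(current, model_envs, weights):
    """Return model application environments that score higher than the
    current environment on at least one performance metric."""
    current_scores = assign_scores(current, weights)
    recommendations = []
    for env_name, metrics in model_envs.items():
        scores = assign_scores(metrics, weights)
        if any(scores[m] > current_scores.get(m, float("-inf"))
               for m in scores):
            recommendations.append(env_name)
    return recommendations
```

Note that metrics where lower is better (e.g. latency) would need inverting before scoring; the sketch assumes all metrics are higher-is-better.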
Application link resource scaling method, apparatus, and system based on concurrent stress testing of plural application links
An application link scaling method, apparatus, and system are provided. The method includes obtaining an application link, the application link being a path formed by at least two associated applications for a service scenario; determining information of target resources required by capacity scaling for all applications in the application link; allocating respective resources to the applications according to the information of the target resources; and generating instances for the applications according to the respective resources. From the perspective of services, the method performs capacity assessment for the related applications on a link as a whole, and capacity scaling of the entire link, thus fully utilizing resources and preventing insufficient resources when the applications are called by other applications. This ensures that the applications do not become a vulnerability of the system, ensures the stability of the system, avoids allocating excessive resources to the applications, and reduces waste of resources.
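Scaling the whole link rather than one application can be sketched as sizing every application on the path for the same target load. The per-request CPU costs, the per-instance capacity constant, and the function name are hypothetical values invented for the example.

```python
import math

# Assumed capacity of one application instance, in CPU cores.
CPU_PER_INSTANCE = 2.0

def scale_link(link, per_request_cpu, target_rps):
    """Compute target resources for every application on the link so
    the entire path sustains target_rps, then derive instance counts.

    link: ordered application names forming the path;
    per_request_cpu: CPU-seconds each application spends per request.
    """
    plan = {}
    for app in link:
        cpu_needed = per_request_cpu[app] * target_rps
        instances = max(1, math.ceil(cpu_needed / CPU_PER_INSTANCE))
        plan[app] = instances
    return plan
```

Because every application on the link is sized against the same target rate, no single application becomes the bottleneck (the "vulnerability") when upstream applications call it under the stress-tested load.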
Virtualization of complex networked embedded systems
A testing and verification system for an equivalent physical configuration of an in-flight entertainment and communications system with one or more hardware components includes a virtual machine manager. One or more virtual machines, each including a hardware abstraction layer, are instantiated by the virtual machine manager according to simulated hardware component definitions corresponding to the equivalent physical configuration of the hardware components. The virtual machines are in communication with each other over virtual network connections. A test interface to the one or more virtual machines generates test inputs to target software applications installed on the virtual machines. A display interface is connected to the virtual machines, to which results from the execution of the target software applications responsive to the test inputs are output.