G06F11/1415

Master network techniques for a digital duplicate

Disclosed herein are techniques and tools for verifying data for semantic correctness and/or network correctness. In one respect, a method includes: receiving an input defining at least two master nodes and at least one master link, where each master node has one or more respective data properties populated with master node data and the master link has one or more items of master link data, the master nodes and master link together defining a master semantic network; importing source data into a second semantic network; comparing the source data to the master node data and making a first determination that the source data reflects a data relationship defined by the master node data; and, based on the first determination, populating the source data into the second semantic network, wherein the source data populated within the second semantic network reflects the data relationship defined by the master node data and the master link data.
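The comparison step might be sketched as follows; the `Node`/`Link` classes and record fields here are illustrative assumptions, not terms from the patent:

```python
# Hypothetical sketch: validate a source record against a "master"
# semantic network before populating it into a second network.

class Node:
    def __init__(self, label, properties):
        self.label = label
        self.properties = properties  # master node data

class Link:
    def __init__(self, source, target, relation):
        self.source, self.target, self.relation = source, target, relation

def populate_if_valid(master_nodes, master_links, source_record):
    """Populate source_record only if it reflects a data relationship
    defined by the master nodes and master link."""
    labels = {n.label for n in master_nodes}
    relations = {(l.source, l.relation, l.target) for l in master_links}
    triple = (source_record["from"], source_record["rel"], source_record["to"])
    if triple[0] in labels and triple[2] in labels and triple in relations:
        return {"nodes": [triple[0], triple[2]], "link": triple}  # populated
    return None  # rejected: does not match the master semantics
```

A record whose relationship is not declared in the master network is rejected rather than populated.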

Remote debug for scaled computing environments

Techniques and apparatus for remotely accessing debugging resources of a target system are described. A target system including physical compute resources, such as processors and a chipset, can be coupled to a controller remotely accessible over a network. The controller can be arranged to facilitate remote access to debug resources of the physical compute resources. The controller can be coupled to debug pins, such as those of a debug port, and arranged to assert control signals on the pins to access debug resources. The controller can also be arranged to exchange information elements with a remote debug host, including indications of debug operations and/or debug results.
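The host/controller exchange could look roughly like this simulation; the message fields, pin name, and operation are all assumptions made for illustration:

```python
# Illustrative sketch: a remote debug host sends an information element
# to a controller, which asserts a control signal on a debug pin and
# returns a result element.

def make_debug_request(operation, target_pin):
    """Information element sent by the remote debug host."""
    return {"type": "debug_op", "op": operation, "pin": target_pin}

class DebugController:
    """Simulated controller coupled to debug pins of a debug port."""
    def __init__(self):
        self.pins = {}  # pin name -> asserted signal level

    def handle(self, element):
        if element["type"] == "debug_op" and element["op"] == "halt":
            self.pins[element["pin"]] = 1  # assert the control signal
            return {"type": "debug_result", "op": "halt", "status": "ok"}
        return {"type": "debug_result", "status": "unsupported"}
```

In a real system the pin assertion would drive hardware (e.g. a JTAG-style port) rather than a dictionary.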

Workflows for automated operations management
11709735 · 2023-07-25

Techniques are disclosed relating to automated operations management. In various embodiments, a computer system accesses operational information that defines commands for an operational scenario and accesses blueprints that describe operational entities in a target computer environment related to the operational scenario. The computer system implements the operational scenario for the target computer environment. The implementing may include executing a hierarchy of controller modules that includes an orchestrator controller module at the top level of the hierarchy that is executable to carry out the commands by issuing instructions to controller modules at a next level. The controller modules may be executable to manage the operational entities according to the blueprints to complete the operational scenario. In various embodiments, the computer system includes additional features such as an application programming interface (API), a remote routing engine, a workflow engine, a reasoning engine, a security engine, and a testing engine.
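The controller hierarchy described above can be sketched as a top-level orchestrator fanning commands out to next-level controllers; class and entity names here are invented for illustration:

```python
# Minimal sketch of a two-level controller hierarchy: an orchestrator
# carries out commands by issuing instructions to next-level controllers,
# each of which manages entities taken from a blueprint.

class Controller:
    def __init__(self, name, entities):
        self.name = name
        self.entities = entities  # operational entities from a blueprint

    def manage(self, instruction):
        # Apply the instruction to every entity this controller manages.
        return [f"{self.name}: {instruction} {e}" for e in self.entities]

class Orchestrator:
    """Top-level controller module of the hierarchy."""
    def __init__(self, controllers):
        self.controllers = controllers  # next level of the hierarchy

    def run(self, commands):
        log = []
        for cmd in commands:
            for c in self.controllers:
                log.extend(c.manage(cmd))
        return log
```

A deeper hierarchy would simply nest controllers that themselves delegate downward.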

Systems and methods for host image transfer

Methods and systems for transferring a host image of a first machine to a second machine, such as during disaster recovery or migration, are disclosed. In one example, a first profile of a first machine of a first type is compared to a second profile of a second machine of a second type different from the first type, to which the host image is to be transferred. The first and second profiles each comprise at least one property of the first type of first machine and the second type of second machine, respectively. At least one property of a host image of the first machine is conformed to at least one corresponding property of the second machine. The conformed host image is provided to the second machine, via a network. The second machine is configured with at least one conformed property of the host image.
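The profile-comparison and conforming steps might be sketched as below; the property names (`nic_driver`, etc.) are hypothetical, not drawn from the abstract:

```python
# Illustrative sketch: conform a host image's properties to a target
# machine of a different type by comparing source and target profiles.

def conform_host_image(image, source_profile, target_profile):
    """Return a copy of the image in which every property that differs
    between the two machine profiles is rewritten to the target's value."""
    conformed = dict(image)
    for prop, target_value in target_profile.items():
        if source_profile.get(prop) != target_value and prop in conformed:
            conformed[prop] = target_value  # conform to the second machine
    return conformed
```

Properties the two machine types already share, and properties not present in the profiles, are carried over unchanged.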

System and method for a disaster recovery environment tiering component mapping for a primary site
20230229551 · 2023-07-20

A method for managing specialized hardware resources includes obtaining, by a disaster recovery (DR) virtual resource agent, a request for a DR environment for a set of virtual resources in a primary site and, in response to the request: monitoring the primary site to obtain virtual workload information corresponding to the set of virtual resources; performing a workload analysis on the set of virtual resources in the primary site using the virtual workload information to obtain a virtual resource mapping of each virtual resource in the primary site to a tiered component in the DR environment; and initiating a DR environment allocation of DR virtual resources based on the virtual resource mapping.
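The workload-analysis-to-tier mapping could be sketched as a simple threshold lookup; the tier names and load metric are assumptions for illustration:

```python
# Hypothetical sketch: map each virtual resource to a tiered DR
# component based on observed workload.

def map_to_tiers(workload_info, tiers):
    """workload_info maps a virtual resource to a load score in [0, 1];
    tiers is a list of (threshold, tier_name) sorted descending by
    threshold. Each resource gets the first tier whose threshold it meets."""
    mapping = {}
    for vm, load in workload_info.items():
        for threshold, tier in tiers:
            if load >= threshold:
                mapping[vm] = tier
                break
    return mapping
```

The resulting mapping is what a DR allocation step would consume when provisioning tiered components.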

System, method, and computer program for a microservice lifecycle operator

As described herein, a system, method, and computer program are provided for a microservice lifecycle operator. In use, at least one specification for a microservice is identified. Further, a lifecycle of the microservice is managed, using a lifecycle operator and the at least one specification.

Techniques to provide self-healing data pipelines in a cloud computing environment
20230214289 · 2023-07-06

Embodiments may generally be directed to systems and techniques to detect failure events in data pipelines, determine one or more remedial actions to perform, and perform the one or more remedial actions.
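The detect-then-remediate flow might be sketched with a lookup table of remedial actions; the failure and action names are invented for illustration:

```python
# Illustrative sketch: detect failure events in a data pipeline and
# select one or more remedial actions to perform.

REMEDIES = {  # hypothetical failure-type -> remedial-action table
    "stage_timeout": "restart_stage",
    "schema_drift": "quarantine_batch",
}

def heal(pipeline_events):
    """Return (stage, action) pairs for every event with a known remedy."""
    actions = []
    for event in pipeline_events:
        action = REMEDIES.get(event["failure"])
        if action:
            actions.append((event["stage"], action))
    return actions
```

Events with no matching remedy would presumably be escalated rather than silently dropped in a production system.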

Techniques for command execution using a state machine

Techniques for processing a request may include: providing tasks to a state machine framework, wherein the tasks perform processing of a workflow for servicing the request; generating, by the state machine framework, a state machine for processing the request, wherein the state machine includes states associated with the tasks, and wherein generating the state machine may include automatically determining a first state transition of the state machine between a first and a second of the states; receiving the request; and, responsive to receiving the request, performing first processing using the state machine to service the request. The framework may automatically generate triggers that drive the state machine to determine subsequent states in accordance with defined state transitions. State machine internal state information may be persistently stored and used in restoring the state machine to one of its states in connection with processing of the command.
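One minimal way to picture this: states derived from the task list, transitions auto-determined between consecutive tasks, triggers that advance the machine, and a save/restore of internal state. All names below are illustrative, not the patent's API:

```python
# Minimal sketch of a framework-generated state machine: one state per
# task, a transition automatically derived between consecutive tasks,
# and persistable internal state.

class StateMachine:
    def __init__(self, tasks):
        self.tasks = tasks
        # Automatically determine a transition between each pair of
        # consecutive states.
        self.transitions = {tasks[i]: tasks[i + 1]
                            for i in range(len(tasks) - 1)}
        self.current = tasks[0]

    def trigger(self):
        """Drive the machine to the next state per the defined transitions."""
        self.current = self.transitions.get(self.current, self.current)
        return self.current

    def save(self):
        return {"current": self.current}  # internal state to persist

    def restore(self, snapshot):
        self.current = snapshot["current"]  # resume from a stored state
```

A real framework would attach the task callables to the states and fire triggers on task completion; here `trigger()` is called manually.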

Hang detection and remediation in a multi-threaded application process

Detecting non-callable external component APIs is provided. It is determined whether a first function call stack of a worker thread in a multi-threaded application of the computer matches a second function call stack of the worker thread. In response to determining that the first function call stack matches the second function call stack of the worker thread, an external component application programming interface (API) corresponding to the worker thread is identified from a function call stack of the worker thread. The external component API corresponding to the worker thread is marked as non-callable in an API state map. The worker thread is marked as being in a hang state. The worker thread in the hang state is terminated as a remediation action step to maintain performance.
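The stack-comparison check can be sketched as follows; the snapshot format (a list of frame names, outermost first) and return fields are assumptions for illustration:

```python
# Illustrative sketch: detect a hung worker thread by comparing two
# samples of its function call stack, then mark the offending external
# component API as non-callable in an API state map.

def check_worker(stack_t0, stack_t1, api_state_map):
    """If the call stack is unchanged between two samples, treat the
    worker as hung, record the external API at the top of the stack as
    non-callable, and flag the thread for termination."""
    if stack_t0 != stack_t1:
        return {"hung": False}
    external_api = stack_t1[-1]  # innermost frame: the external API call
    api_state_map[external_api] = "non-callable"
    return {"hung": True, "terminate": True, "api": external_api}
```

The API state map would then be consulted before future calls, so other workers avoid the non-callable API while the hung thread is terminated.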

Replicating data using a replication server of a multi-user system

A context-driven multi-user system may include distributed computing resource(s) that replicate proper subset(s) of user-relevant context data to computing device(s). A replication server may receive an update record corresponding to stored context data and determine propagation records based at least in part thereon. Each propagation record may correspond to a respective, different proper subset of the context data. The replication server may transmit the propagation records to respective replication clients. A replication client may receive a propagation record and modify local context data in response. The replication client may receive a data record and determine an update record in response and using the local context data. The computing device may transmit the determined update record. A server may receive a query specification referencing a data source and transmit the query specification to a replication client of the multi-user system, the client corresponding to the data source.
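The step of turning one update record into per-client propagation records, each carrying only that client's proper subset of the context data, might look like this (the subscription structure is an assumption for illustration):

```python
# Illustrative sketch: a replication server splits an update record into
# propagation records, one per replication client, each containing only
# the proper subset of context data that client is entitled to.

def propagate(update_record, client_subscriptions):
    """client_subscriptions maps a client id to the set of context keys
    relevant to that client; returns a propagation record per client."""
    records = {}
    for client, keys in client_subscriptions.items():
        subset = {k: v for k, v in update_record.items() if k in keys}
        if subset:  # skip clients for whom nothing in the update is relevant
            records[client] = subset
    return records
```

Each replication client would apply its received record to its local context data, keeping the distributed copies consistent with the server.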