Patent classifications
G06F8/656
COHERENCE-BASED DYNAMIC CODE REWRITING, TRACING AND CODE COVERAGE
A device tracks accesses to pages of code executed by processors and modifies a portion of the code without terminating the execution of the code. The device is connected to the processors via a coherence interconnect and a local memory of the device stores the code pages. As a result, any requests to access cache lines of the code pages made by the processors will be placed on the coherence interconnect, and the device is able to track any cache-line accesses of the code pages by monitoring the coherence interconnect. In response to a request to read a cache line having a particular address, a modified code portion is returned in place of the code portion stored in the code pages.
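The mechanism above can be sketched as a small interposer: the device's local memory holds the code pages, a patch table maps cache-line addresses to rewritten code, and every read seen on the coherence interconnect is both logged (tracing and coverage) and answered with the patched bytes when an entry exists, without stopping the running code. This is a hypothetical illustration; the class, method names, and 64-byte line size are assumptions, not the patent's interfaces.

```python
CACHE_LINE = 64  # bytes per cache line (assumed)

class CoherenceInterposer:
    def __init__(self, code_pages: bytes):
        self.code_pages = bytearray(code_pages)   # device-local copy of the code pages
        self.patches: dict[int, bytes] = {}       # cache-line address -> modified code portion
        self.access_log: list[int] = []           # trace of observed cache-line reads

    def rewrite(self, addr: int, new_code: bytes) -> None:
        """Stage a modified code portion for one cache line (dynamic rewrite)."""
        assert addr % CACHE_LINE == 0 and len(new_code) == CACHE_LINE
        self.patches[addr] = new_code

    def on_read(self, addr: int) -> bytes:
        """Handle a cache-line read observed on the coherence interconnect."""
        line = addr - (addr % CACHE_LINE)
        self.access_log.append(line)              # coverage: this code line was fetched
        if line in self.patches:                  # serve patched bytes in place of stored code
            return bytes(self.patches[line])
        return bytes(self.code_pages[line:line + CACHE_LINE])

    def coverage(self) -> set[int]:
        """Cache lines of the code pages that were ever fetched."""
        return set(self.access_log)
```

A processor's read of a patched address then transparently receives the modified code portion, while reads of unpatched lines return the stored pages.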

METHODS AND APPARATUS TO FACILITATE CONTENT GENERATION FOR CLOUD COMPUTING PLATFORMS
Methods, apparatus, systems, and articles of manufacture are disclosed to facilitate content generation for cloud computing platforms. An example apparatus includes model definition circuitry to generate model definitions representative of one or more undefined target system objects in a target system, and generate instructions that cause a developer environment to provide the model definitions during a generation of content files, generation order circuitry to generate a processing order of the content files, the content files having one or more defined model objects, and object processing circuitry to convert the one or more defined model objects to defined target system objects for deployment and execution at the target system.
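The generation-order and conversion steps could look like the following sketch: content files may reference model objects defined in other files, so they are processed in dependency order before each defined model object is converted to a target-system object. All names and data shapes here are illustrative assumptions, not the claimed circuitry's interfaces.

```python
from graphlib import TopologicalSorter

def processing_order(content_files: dict[str, set[str]]) -> list[str]:
    """Return a processing order: each file comes after the files it depends on."""
    return list(TopologicalSorter(content_files).static_order())

def convert(model_object: dict) -> dict:
    """Convert one defined model object into a deployable target-system object."""
    return {"kind": model_object["type"], "spec": model_object.get("fields", {})}
```

With this shape, a file defining a model object is always processed before any file that consumes it.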
Telecom Microservice Rolling Upgrades
A method is disclosed for providing a telecom microservice rolling upgrade, the method comprising: providing, by a Service Management and Orchestration (SMO), a new instance of F1 demux in a same cluster and namespace; advertising the new instance of the F1 demux to all pods and microservices; informing, by the SMO, an old F1 demux to start a version upgrade to a new instance; sending, by the old F1 demux, a trigger to start a reconcile procedure to a new F1 demux; advertising that the old instance of the F1 demux is not available to take up new calls from internal pods and microservices, and is accepting traffic via the new F1 demux only; routing, by the old F1 demux, all incoming F1 traffic from a Distributed Unit (DU) to the new F1 demux; and instructing the DU, by the old F1 demux, to add a Transport Network Layer (TNL) association of the new F1 demux.
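The claimed sequence can be laid out as a plain ordered plan: old-instance traffic drains through the new instance before the DU switches its TNL association, so no in-flight F1 calls are dropped. The function below is a hypothetical sketch parameterized by instance names; it records the claim's steps, it is not the patent's implementation.

```python
def rolling_upgrade_steps(old: str, new: str) -> list[str]:
    """Ordered actions of the rolling upgrade, from deployment to TNL switch."""
    return [
        f"SMO deploys {new} in the same cluster and namespace",
        f"SMO advertises {new} to all pods and microservices",
        f"SMO tells {old} to start the version upgrade",
        f"{old} triggers the reconcile procedure on {new}",
        f"{old} is advertised as draining: new internal calls go via {new} only",
        f"{old} routes incoming F1 traffic from the DU to {new}",
        f"{old} instructs the DU to add a TNL association for {new}",
    ]
```

The ordering matters: the DU learns the new TNL association only after the old instance is already forwarding its traffic, which is what makes the upgrade "rolling".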
TECHNIQUES FOR PATCHING IN A DISTRIBUTED COMPUTING SYSTEM
A system may include multiple software components of a software application running on multiple nodes in a distributed computing system, and a patch execution server including a patch build server with a structured patch execution module connected to the distributed computing system via a network. The patch execution module receives an uploaded patch, a patch definition file, and a workflow template from a global patch repository. Further, the patch execution module creates a patch definition file for the patch using an associated patch master file, an associated build definition file, and an associated product definition file. Furthermore, the patch execution module creates a workflow template using the patch definition file and the patch. Also, the patch execution module creates a workflow file using node information associated with the multiple nodes and the workflow template. In addition, the patch execution module executes the patch, using the patch, the patch definition file, and the workflow file, across the multiple nodes in the distributed computing system.
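The last two steps could be sketched as follows: the workflow template is expanded with per-node information into a workflow file, and the patch is then applied node by node according to that file. The dictionary layout and field names are assumptions for illustration, not the patent's file formats.

```python
def create_workflow(template: dict, nodes: list[dict]) -> dict:
    """Bind the workflow template to the concrete nodes of the distributed system."""
    return {
        "patch": template["patch"],
        "steps": [
            {"node": n["host"], "component": n["component"], "action": a}
            for n in nodes
            for a in template["actions"]
        ],
    }

def execute_patch(workflow: dict, apply) -> list[str]:
    """Run every workflow step; `apply` performs one action on one node."""
    return [apply(s["node"], s["component"], s["action"]) for s in workflow["steps"]]
```

Separating the template (what to do) from the workflow file (where to do it) is what lets one patch definition drive many differently-shaped deployments.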
INVENTORY MANAGEMENT FOR DATA TRANSPORT CONNECTIONS IN VIRTUALIZED ENVIRONMENT
Aspects of managing inventory for data transport connections within a virtualized computing environment are described. A virtualized management system managing a cluster of host devices obtains a data transport capacity parameter and an aggregate memory consumption value from respective host devices. The virtualized management system further identifies an update status associated with each of the host devices. In response to receiving a data transport connection request, the virtualized management system selects a host from the cluster of host devices to satisfy the data transport connection request based at least in part on the update status, the data transport capacity parameter, and the aggregate memory consumption value.