Patent classifications
H04L41/5077
High performance compute infrastructure as a service
A high performance computing environment includes a plurality of computing resources, a plurality of tenant clouds organized from the plurality of computing resources, and an Infrastructure as a Service resource manager. The Infrastructure as a Service resource manager further includes a plurality of Infrastructure as a Service system interfaces and a portal. In operation, a cloud user interacts over a secure link and through the portal with the Infrastructure as a Service system interfaces to perform cloud tasks relative to a particular one of the plurality of tenant clouds of the high performance computing environment.
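A minimal sketch of the arrangement described above, assuming hypothetical names (Portal, IaaSSystemInterface, submit) that are not taken from the patent: the portal accepts a task from a cloud user over a secure link and dispatches it to the IaaS system interface registered for one tenant cloud.

    # Hypothetical sketch: a portal dispatching a cloud user's task to the
    # IaaS system interface that fronts a particular tenant cloud.
    class IaaSSystemInterface:
        def __init__(self, tenant_cloud):
            self.tenant_cloud = tenant_cloud

        def perform(self, task):
            return f"'{task}' executed on tenant cloud '{self.tenant_cloud}'"

    class Portal:
        def __init__(self):
            self.interfaces = {}  # tenant cloud name -> IaaS system interface

        def register(self, tenant_cloud):
            self.interfaces[tenant_cloud] = IaaSSystemInterface(tenant_cloud)

        def submit(self, tenant_cloud, task, secure_link=True):
            if not secure_link:
                raise PermissionError("cloud users must connect over a secure link")
            return self.interfaces[tenant_cloud].perform(task)

    portal = Portal()
    portal.register("tenant-a")
    print(portal.submit("tenant-a", "resize compute allocation"))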
Model driven process for automated deployment of domain 2.0 virtualized services and applications on cloud infrastructure
A model-driven system automatically deploys a virtualized service, including multiple service components, on a distributed cloud infrastructure. A master service orchestrator causes a cloud platform orchestrator to retrieve a cloud services archive file, extract a cloud resource configuration template and create cloud resources at appropriate data centers as specified. The master service orchestrator also causes a software defined network controller to retrieve the cloud services archive file, to extract a cloud network configuration template and to configure layer 1 through layer 3 virtual network functions and to set up routes between them. Additionally, the master service orchestrator causes an application controller to retrieve the cloud services archive file, to extract a deployment orchestration plan and to configure and start layer 4 through layer 7 application components and bring them to a state of operational readiness.
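As a rough illustration only, the control flow above could look like the following sketch; the archive keys, controller functions, and template contents are assumptions for readability, not the patent's actual artifacts.

    # Hedged sketch (assumed structure): a master orchestrator that hands the same
    # cloud services archive to three controllers, each extracting its own artifact.
    CSAR = {
        "cloud_resource_template": {"data_centers": ["dc-east", "dc-west"]},
        "cloud_network_template": {"routes": [("vnf-l2", "vnf-l3")]},
        "deployment_plan": {"app_components": ["lb", "api", "db"]},
    }

    def cloud_platform_orchestrator(archive):
        template = archive["cloud_resource_template"]
        return [f"resources created at {dc}" for dc in template["data_centers"]]

    def sdn_controller(archive):
        template = archive["cloud_network_template"]
        return [f"route configured: {a} -> {b}" for a, b in template["routes"]]

    def application_controller(archive):
        plan = archive["deployment_plan"]
        return [f"{component} started and ready" for component in plan["app_components"]]

    def master_service_orchestrator(archive):
        steps = []
        steps += cloud_platform_orchestrator(archive)   # cloud resources at data centers
        steps += sdn_controller(archive)                # layer 1-3 virtual network functions
        steps += application_controller(archive)        # layer 4-7 application components
        return steps

    for step in master_service_orchestrator(CSAR):
        print(step)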
Providing user subscription nomadicity in wireline broadband networks
In general, techniques are described for providing user nomadicity in wireline broadband networks. A network device positioned in a wireline broadband network comprising a processor and an interface may be configured to perform the techniques. The processor may be configured to execute a first virtual customer premises equipment to provide, to a first subscriber, access to the wireline broadband network from a first subscription point in accordance with a first subscription. The processor may also be configured to provide, to a second subscriber, access to the wireline broadband network from the first subscription point in accordance with a second subscription. The interface may be configured to forward, in accordance with the first subscription, traffic received from the first subscription point and associated with the first subscriber, and forward, in accordance with the second subscription, traffic received from the first subscription point and associated with the second subscriber.
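The per-subscription forwarding described above might be sketched as below; the class name, the (subscription point, subscriber) keying, and the string outputs are illustrative assumptions.

    # Illustrative sketch: one virtual CPE serving two subscribers from the same
    # subscription point, forwarding each flow according to its own subscription.
    class VirtualCPE:
        def __init__(self):
            self.subscriptions = {}   # (subscription_point, subscriber) -> subscription

        def attach(self, subscription_point, subscriber, subscription):
            self.subscriptions[(subscription_point, subscriber)] = subscription

        def forward(self, subscription_point, subscriber, packet):
            subscription = self.subscriptions[(subscription_point, subscriber)]
            return f"packet {packet} forwarded under {subscription}"

    vcpe = VirtualCPE()
    vcpe.attach("port-1", "alice", "subscription-A")
    vcpe.attach("port-1", "bob", "subscription-B")
    print(vcpe.forward("port-1", "alice", "pkt-1"))
    print(vcpe.forward("port-1", "bob", "pkt-2"))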
Methods and systems for loosely coupled PCIe service proxy over an IP network
PCIe devices installed in host computers communicating with service nodes can provide virtualized and high availability PCIe functions to host computer workloads. The PCIe device can receive a PCIe TLP encapsulated in a PCIe DLLP via a PCIe bus. The TLP includes a TLP address value, a TLP requester identifier, and a TLP type. The PCIe device can terminate the PCIe transaction by sending a DLLP ACK message to the host computer in response to receiving the TLP. The TLP packet can be used to create a workload request capsule that includes a request type indicator, an address offset, and a workload request identifier. A workload request packet that includes the workload request capsule can be sent to a virtualized service endpoint. The service node, implementing the virtualized service endpoint, receives a workload response packet that includes the workload request identifier and a workload response payload.
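A hedged sketch of the capsule construction step, with the field names and the region-base offset calculation assumed rather than taken from the abstract:

    # Rough sketch: turning a received TLP into a workload request capsule that
    # can be sent toward a virtualized service endpoint over an IP network.
    from dataclasses import dataclass

    @dataclass
    class Tlp:
        address: int
        requester_id: int
        tlp_type: str          # e.g. "MemWr" or "MemRd"
        payload: bytes = b""

    @dataclass
    class WorkloadRequestCapsule:
        request_type: str
        address_offset: int
        request_id: int
        payload: bytes

    def build_capsule(tlp: Tlp, region_base: int, request_id: int) -> WorkloadRequestCapsule:
        # The TLP is acknowledged locally (DLLP ACK); only the offset into the
        # mapped region, not the raw host address, travels to the service node.
        return WorkloadRequestCapsule(
            request_type=tlp.tlp_type,
            address_offset=tlp.address - region_base,
            request_id=request_id,
            payload=tlp.payload,
        )

    capsule = build_capsule(Tlp(0x1000_0040, 0x0100, "MemWr", b"\x01"), 0x1000_0000, 7)
    print(capsule)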
Customer activation on edge computing environment
A system and method for providing on-demand edge compute. The system may include an orchestrator that provides a UI and that controls an abstraction layer for implementing a workflow for providing on-demand edge compute. The abstraction layer may include a server configuration orchestration (SCO) system (e.g., a Metal-as-a-Service (MaaS) system) and an API that may provide an interface between the orchestrator and the SCO. The API may enable the orchestrator to communicate with the SCO for receiving requests that enable the SCO to integrate with existing compute resources to perform various compute provisioning tasks (e.g., to build and provision a server instance). The various tasks, when executed, may provide on-demand edge compute service to users. The SCO API may further enable the orchestrator to receive information from the SCO (e.g., compute resource information, status messages).
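One way the orchestrator/SCO interaction might be sketched, with the API surface (build_server, get_status) and the status values invented for illustration:

    # Hypothetical sketch: an orchestrator driving a server configuration
    # orchestration (SCO) system through a thin API layer.
    class ScoApi:
        def __init__(self):
            self.instances = {}   # instance id -> {"request": ..., "status": ...}

        def build_server(self, request):
            instance_id = f"edge-{len(self.instances) + 1}"
            self.instances[instance_id] = {"request": request, "status": "provisioning"}
            return instance_id

        def get_status(self, instance_id):
            # A real SCO would report build progress; this stub flips straight to "ready".
            self.instances[instance_id]["status"] = "ready"
            return self.instances[instance_id]["status"]

    class Orchestrator:
        def __init__(self, sco):
            self.sco = sco

        def activate_customer(self, site, flavor):
            instance_id = self.sco.build_server({"site": site, "flavor": flavor})
            return instance_id, self.sco.get_status(instance_id)

    orchestrator = Orchestrator(ScoApi())
    print(orchestrator.activate_customer(site="edge-pop-12", flavor="compute.medium"))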
Network path selection
A method may include monitoring a network performance metric for multiple paths to a destination through a network, and storing historical performance data for the paths. The method may also include receiving a data flow directed to the destination, where the data flow may be subject to a network performance agreement. The method may additionally include determining aggregate historical performances for the paths, and comparing the aggregate historical performances for the paths. The method may also include, based on the comparison of the aggregate historical performances, routing the data flow through the network.
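A simplified sketch of such a selection step, assuming latency as the monitored metric and a single threshold taken from the performance agreement (both assumptions):

    # Simplified sketch: pick the path whose aggregate historical latency best
    # satisfies a flow's network performance agreement.
    from statistics import mean

    historical_latency_ms = {
        "path-a": [22, 25, 21, 30],
        "path-b": [18, 19, 45, 17],
        "path-c": [35, 33, 34, 36],
    }

    def select_path(history, max_latency_ms):
        aggregates = {path: mean(samples) for path, samples in history.items()}
        eligible = {p: v for p, v in aggregates.items() if v <= max_latency_ms}
        candidates = eligible or aggregates          # fall back to best effort
        return min(candidates, key=candidates.get)

    print(select_path(historical_latency_ms, max_latency_ms=25))   # "path-a" (24.5 ms)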
Supporting the fulfilment of E2E QoS requirements in TSN-3GPP network integration
A method including obtaining parameters for a flow from a first network through a second network, the parameters including: a maximum protocol data unit volume PDUV_max in the first network, a maximum flow bit rate MFBR in the second network, a guaranteed flow bit rate GFBR in the second network, and a maximum protocol data unit delay budget in the second network; and deriving from the obtained parameters: a maximum delay a packet of the flow experiences in the second network, wherein the maximum delay is a sum of a maximum PDUV_max-dependent contribution and a maximum PDUV_max-independent contribution, and a minimum delay the packet experiences in the second network, wherein the minimum delay is a sum of a minimum PDUV_max-dependent contribution and a minimum PDUV_max-independent contribution.
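As a worked sketch only: one plausible reading, not stated in the abstract, is that the PDUV_max-dependent contribution is the time to serve PDUV_max bits at the guaranteed (worst case) or maximum (best case) bit rate, with the independent contribution covering the remaining per-packet delay.

    # Worked sketch (the exact split of contributions is an assumption): max/min
    # delay as a rate-dependent term for draining PDUV_max bits plus a
    # rate-independent per-packet term.
    def delay_bounds(pduv_max_bits, mfbr_bps, gfbr_bps,
                     fixed_delay_min_s, fixed_delay_max_s):
        # PDUV_max-dependent part: serving PDUV_max bits at the guaranteed rate
        # (worst case) or at the maximum rate (best case).
        dependent_max = pduv_max_bits / gfbr_bps
        dependent_min = pduv_max_bits / mfbr_bps
        # PDUV_max-independent part: residual per-packet delay in the second network.
        return (dependent_min + fixed_delay_min_s,
                dependent_max + fixed_delay_max_s)

    low, high = delay_bounds(pduv_max_bits=12_000, mfbr_bps=100_000_000,
                             gfbr_bps=10_000_000, fixed_delay_min_s=0.001,
                             fixed_delay_max_s=0.005)
    print(f"min delay {low * 1e3:.2f} ms, max delay {high * 1e3:.2f} ms")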
Artifact lifecycle management on a cloud computing system
Artifact lifecycle management on a cloud computing system and methods of managing the same are disclosed herein. In one embodiment, a method includes providing an integrated development environment for developing an artifact to be deployed in a productive environment of the cloud computing system; generating an artifact package associated with the artifact based on inputs received via the integrated development environment; performing one or more tests on the artifact package using the integrated development environment based on one or more test cases; performing one or more validation checks on the artifact to be deployed in the productive environment; deploying the validated artifact in the productive environment; provisioning the deployed artifact to one or more tenants of the cloud computing system; and providing access to the artifact to the one or more tenants based on a role and permissions assigned to each of the tenants.
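The lifecycle stages read naturally as a pipeline; the sketch below uses invented stage functions and package fields purely to illustrate the ordering (test, validate, deploy, provision, expose by role).

    # Condensed sketch (stage names assumed): the lifecycle stages from the
    # abstract expressed as a simple pipeline over an artifact package.
    def run_tests(package):
        return all(case(package) for case in package["test_cases"])

    def validate(package):
        return bool(package["metadata"].get("signed"))

    def deploy(package, productive_env):
        productive_env.append(package["name"])

    def provision(package, tenants, roles):
        # Tenants only see the artifact if their role carries the needed permission.
        return [t for t in tenants if "use-artifact" in roles.get(t, [])]

    package = {"name": "billing-extension", "metadata": {"signed": True},
               "test_cases": [lambda p: p["name"] != ""]}
    productive_env, tenants = [], ["tenant-1", "tenant-2"]
    roles = {"tenant-1": ["use-artifact"], "tenant-2": []}

    if run_tests(package) and validate(package):
        deploy(package, productive_env)
        print("deployed:", productive_env, "visible to:", provision(package, tenants, roles))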
Systems and methods for providing individualized communication service
A method for providing individualized communication service includes (1) recognizing a first client being communicatively coupled to a first local communication network, (2) determining an identity of the first client, (3) transporting first data between the first client and a first operator communication network, using the first local communication network in accordance with a first service profile associated with the first client, and (4) transporting the first data using the first operator communication network in accordance with the first service profile.
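A small sketch of steps (1) through (4), where the recognition directory, the profile fields, and the "leg" naming are assumptions for illustration:

    # Small sketch: apply a per-client service profile on both the local network
    # and the operator network legs of the transport.
    directory = {"aa:bb:cc:dd:ee:ff": "client-42"}   # recognition: device -> client identity

    service_profiles = {
        "client-42": {"downstream_mbps": 200, "priority": "high"},
    }

    def recognize(mac_address):
        return directory.get(mac_address)

    def transport(client_id, data, leg):
        profile = service_profiles[client_id]
        return (f"{len(data)} bytes on {leg} leg at "
                f"{profile['downstream_mbps']} Mb/s, priority {profile['priority']}")

    client = recognize("aa:bb:cc:dd:ee:ff")
    print(transport(client, b"payload", leg="local network"))
    print(transport(client, b"payload", leg="operator network"))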