Patent classifications
H04L47/828
Intent-based network virtualization design
Example methods and systems for intent-based network virtualization design are disclosed. One example may comprise: obtaining configuration information and traffic information associated with multiple virtualized computing instances; processing the configuration information and traffic information to identify network connectivity intents; and mapping the network connectivity intents to a logical network topology template. Based on a first switching intent, a first group may be assigned to a first logical network domain and the logical network topology template may be configured to include a first logical switching element. Based on a second switching intent, a second group may be assigned to a second logical network domain and the logical network topology template may be configured to include a second logical switching element. Based on a routing intent, the logical network topology template may be configured to include a logical routing element.
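The intent-to-template mapping above can be sketched in a few lines. This is a minimal illustration only; the class and function names (`TopologyTemplate`, `apply_switching_intent`, `apply_routing_intent`) and the group/domain labels are assumptions, not terms from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class TopologyTemplate:
    # domain -> logical switching element; plus a list of logical routers
    switches: dict = field(default_factory=dict)
    routers: list = field(default_factory=list)

def apply_switching_intent(template, group, domain):
    # Assign the group to a logical network domain and ensure the template
    # includes a logical switching element for that domain.
    switch = template.switches.setdefault(domain, {"domain": domain, "groups": []})
    switch["groups"].append(group)
    return switch

def apply_routing_intent(template, domains):
    # Add a logical routing element connecting the given domains.
    router = {"connects": list(domains)}
    template.routers.append(router)
    return router

template = TopologyTemplate()
apply_switching_intent(template, group="web-vms", domain="domain-1")
apply_switching_intent(template, group="db-vms", domain="domain-2")
apply_routing_intent(template, ["domain-1", "domain-2"])
```

Two switching intents yield two logical switching elements, and the routing intent adds a logical routing element spanning both domains.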
System and method of sharing edge computing resources
A method and a system for sharing an edge computing resource are disclosed. In an embodiment, the method may include receiving, from one or more lessor edge computing resources, one or more first requests for presenting an availability of the one or more lessor edge computing resources, and receiving, from a lessee edge computing resource, a second request for availing at least one lessor edge computing resource. The method may further include, upon receiving the second request, presenting the one or more first requests corresponding to the one or more lessor edge computing resources to the lessee edge computing resource. The method may further include receiving, from the lessee edge computing resource, a selection of a first request from the one or more first requests, and creating a connection between the lessee edge computing resource and the lessor edge computing resource corresponding to the received selection.
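The lessor/lessee flow above resembles a small brokerage: lessors post availability, a lessee browses offers and selects one, and a connection is created. A hypothetical sketch, with all class, method, and node names assumed for illustration:

```python
class EdgeResourceBroker:
    def __init__(self):
        self.offers = []  # "first requests": lessors presenting availability

    def present_availability(self, lessor_id, capacity):
        # A lessor posts a first request advertising its availability.
        self.offers.append({"lessor": lessor_id, "capacity": capacity})

    def list_offers(self, min_capacity=0):
        # Upon a lessee's second request, present matching lessor offers.
        return [o for o in self.offers if o["capacity"] >= min_capacity]

    def connect(self, lessee_id, offer):
        # Create a connection between the lessee and the selected lessor.
        return {"lessee": lessee_id, "lessor": offer["lessor"]}

broker = EdgeResourceBroker()
broker.present_availability("edge-a", capacity=4)
broker.present_availability("edge-b", capacity=8)
offers = broker.list_offers(min_capacity=6)       # only edge-b qualifies
link = broker.connect("edge-c", offers[0])
```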
RESOURCE ALLOCATION CALCULATION APPARATUS AND RESOURCE ALLOCATION CALCULATION METHOD
A resource allocation calculation device includes: a demand prediction unit that, for each of a plurality of virtual networks sharing a physical network, predicts demands in units of communications sharing common origin and destination nodes; and an allocation calculation unit that, based on the demands predicted by the demand prediction unit, observed past demands in the units of communications, and past allocated bandwidths in the units of communications, calculates allocated bandwidths and allocated routes at a current time for the respective units of communications such that fairness between utilities of the respective virtual networks is maximized. This enhances the fairness and efficiency of allocating the resources of a physical network shared by a plurality of virtual networks.
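One simple stand-in for the fairness objective is max-min (water-filling) allocation of a shared capacity across predicted demands. This is a simplified sketch under that assumption; the patented method additionally uses demand forecasts, observed past demands, past allocations, and route selection, none of which are modeled here.

```python
def max_min_allocate(capacity, demands):
    """Water-filling: repeatedly split the remaining capacity evenly
    among still-unsatisfied virtual networks, so the minimum allocation
    is maximized (a proxy for fairness between their utilities)."""
    alloc = {vn: 0.0 for vn in demands}
    remaining = capacity
    active = {vn for vn, d in demands.items() if d > 0}
    while active and remaining > 1e-9:
        share = remaining / len(active)
        for vn in list(active):
            take = min(share, demands[vn] - alloc[vn])
            alloc[vn] += take
            remaining -= take
            if alloc[vn] >= demands[vn] - 1e-9:
                active.discard(vn)   # this network's demand is satisfied
    return alloc

# 10 units shared by two virtual networks: vn1 -> 3 (fully served), vn2 -> 7
alloc = max_min_allocate(10, {"vn1": 3, "vn2": 9})
```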
LOAD-BALANCING ESTABLISHMENT OF CONNECTIONS AMONG GROUPS OF CONNECTOR SERVERS
Techniques are described herein that are capable of load-balancing establishment of connections among groups of connector servers in a public computer network. A connection request is received from a connector client in a private computer network, requesting establishment of a connection between the connector client and one of the connector servers in the public computer network. A number of connections between the private computer network and each group is determined. An identified group is selected from the groups based at least in part on the number of connections between the private computer network and the identified group being less than or equal to the number of connections between the private computer network and each other group. The connection request is provided toward the identified group, which enables establishment of the connection between the connector client and a connector server in the identified group.
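The selection rule above amounts to picking a group whose connection count is a minimum over all groups. A minimal sketch, with hypothetical function names and per-network counters:

```python
def pick_group(conn_counts):
    # Select a group whose connection count from this private network is
    # less than or equal to that of every other group (i.e., a minimum).
    return min(conn_counts, key=conn_counts.get)

def route_request(conn_counts):
    # Provide the connection request toward the least-loaded group and
    # record the new connection against it.
    group = pick_group(conn_counts)
    conn_counts[group] += 1
    return group

counts = {"group-1": 3, "group-2": 1, "group-3": 2}
chosen = route_request(counts)   # "group-2", the least-loaded group
```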
Highly available transmission control protocol tunnels
Redundant transmission control protocol tunneling of the present invention channels client application data through the public Internet via a secure UDP channel. By interposing one or more gateway applications between an endpoint and the public Internet using local loopback addresses, the present invention provides network path failover redundancy.
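The failover idea can be sketched as trying gateway paths in priority order and relaying through the first healthy one. The loopback addresses, port, and health-check predicate below are illustrative assumptions, and the return value stands in for actually relaying over a secure UDP channel:

```python
def send_with_failover(data, gateways, is_healthy):
    # Try gateway applications in priority order; relay via the first
    # path that is currently healthy.
    for gw in gateways:
        if is_healthy(gw):
            return (gw, data)   # stand-in for relaying via secure UDP
    raise ConnectionError("no healthy gateway path")

# Gateways bound to local loopback addresses (hypothetical values).
gateways = ["127.0.0.2:4500", "127.0.0.3:4500"]
gw, _ = send_with_failover(b"payload", gateways,
                           is_healthy=lambda g: g.endswith("3:4500"))
# Falls over to the second gateway because the first is reported unhealthy.
```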
MANAGING CLOUD ACQUISITIONS USING DISTRIBUTED LEDGERS
Systems and methods of the disclosure include: broadcasting, by a cloud resource provisioning component, to a cryptographically-protected distributed ledger, a first transaction comprising a cloud resource request for provisioning a cloud resource; transmitting, to one or more cloud providers, the cloud resource request; receiving, from a first cloud provider of the one or more cloud providers, a first cloud resource offer responsive to the cloud resource request; and broadcasting, to the cryptographically-protected distributed ledger, a second transaction comprising the first cloud resource offer.
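The two broadcasts above can be pictured as appending hash-chained transactions to a ledger. This is a minimal single-node sketch; a real deployment would use a cryptographically protected distributed ledger replicated across parties, and the transaction fields here are assumptions:

```python
import hashlib
import json

class Ledger:
    def __init__(self):
        self.blocks = []

    def broadcast(self, transaction):
        # Chain each transaction to the previous block's hash so the
        # history is tamper-evident.
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps({"tx": transaction, "prev": prev}, sort_keys=True)
        block = {"tx": transaction, "prev": prev,
                 "hash": hashlib.sha256(payload.encode()).hexdigest()}
        self.blocks.append(block)
        return block

ledger = Ledger()
# First transaction: the cloud resource request.
ledger.broadcast({"type": "request", "resource": "vm", "vcpus": 4})
# Second transaction: a provider's offer responsive to the request.
ledger.broadcast({"type": "offer", "provider": "cloud-a", "price": 0.12})
```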
METHOD, DEVICE, AND SYSTEM FOR LIMITING DATA RATE OF NETWORK SLICE USER, AND STORAGE MEDIUM
The present disclosure relates to a method, device, and system for limiting a data rate of a network slice user, and a storage medium. The method for limiting the data rate of the network slice user includes: performing, by a policy control function network element, unified rate limiting on a guaranteed bit rate (GBR) data stream and a non-guaranteed bit rate (non-GBR) data stream of the network slice user in a case where a user equipment initiates a slice data session establishment request or a slice data session modification request.
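A unified limit means GBR and non-GBR packets of the same slice user draw from one budget. A token-bucket sketch of that idea, with the rate, burst size, and class name all illustrative assumptions rather than the patented mechanism:

```python
class UnifiedRateLimiter:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps      # refill rate
        self.burst = burst_bits   # bucket depth
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now, size_bits):
        # Refill tokens for elapsed time, then charge GBR and non-GBR
        # packets from the same bucket so their combined rate stays
        # under the single unified limit.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bits <= self.tokens:
            self.tokens -= size_bits
            return True
        return False

rl = UnifiedRateLimiter(rate_bps=1000, burst_bits=500)
```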
METHOD, PRODUCT, AND SYSTEM FOR GENERATING DETECTION SIGNATURES BASED ON ATTACK PATHS IN A COMPUTER NETWORK IDENTIFIED USING A SOFTWARE REPRESENTATION THAT EMBODIES NETWORK CONFIGURATION AND POLICY DATA FOR SECURITY MANAGEMENT USING DETECTION SIGNATURE TEMPLATES
Disclosed is an approach for generating detection signatures based on analysis of a software representation of what is possible in a computer network, based on network configuration data and network policy data. In some embodiments, the process includes maintaining a plurality of detection signature templates and generating detection signatures (detection signature instances) using respective detection signature templates that are selected based on the analysis of the software representation. In some embodiments, detection signature templates are of different types and may be deployed at different locations based on their respective types, such as at a source or a destination.
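Template-based generation can be pictured as filling a typed template with details of an identified attack path. The template strings, type names, and field names below are hypothetical placeholders, not the signature formats used by the patented system:

```python
# Templates keyed by deployment type (source-side vs destination-side).
SIGNATURE_TEMPLATES = {
    "source": "alert from {src} to {dst} proto {proto}",
    "destination": "alert at {dst} for inbound {proto} from {src}",
}

def generate_signature(template_type, attack_path):
    # Instantiate the selected template with attack-path details found
    # in the software representation of the network.
    template = SIGNATURE_TEMPLATES[template_type]
    return template.format(**attack_path)

path = {"src": "10.0.0.5", "dst": "10.0.1.9", "proto": "tcp/445"}
sig = generate_signature("source", path)
```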
METHOD AND DEVICE USED IN COMMUNICATION NODE FOR WIRELESS COMMUNICATION
A method and a device in a communication node for wireless communications are disclosed. The communication node first receives a first signaling; and then receives a first radio signal in K1 slots and receives a second radio signal in K2 slots; the first signaling is used to determine the K1 and the K2; a first TB is used to generate the first radio signal, while a second TB is used to generate the second radio signal, the first TB comprising a positive integer number of bit(s), and the second TB comprising a positive integer number of bit(s); the K1 slots are divided into X1 slot groups, while the K2 slots are divided into X2 slot groups, and positions of the X1 slot groups and the X2 slot groups are interleaved in time domain. The present disclosure can reduce power consumption and improve coverage performance.
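The slot-group interleaving can be sketched by splitting each signal's slots into contiguous groups and alternating the groups in time. This is only one plausible layout consistent with the abstract; the splitting rule and alternation order are assumptions:

```python
def interleave_slot_groups(k1, k2, x1, x2):
    # Split K1 slots into X1 groups and K2 slots into X2 groups, then
    # alternate first-signal (TB1) and second-signal (TB2) groups in
    # the time domain, assigning absolute slot positions in order.
    def group_sizes(k, x):
        base, extra = divmod(k, x)
        return [base + (1 if i < extra else 0) for i in range(x)]

    s1, s2 = group_sizes(k1, x1), group_sizes(k2, x2)
    order, i, j = [], 0, 0
    while i < len(s1) or j < len(s2):
        if i < len(s1):
            order.append(("TB1", s1[i])); i += 1
        if j < len(s2):
            order.append(("TB2", s2[j])); j += 1

    pos, schedule = 0, []
    for tb, size in order:
        schedule.append((tb, list(range(pos, pos + size))))
        pos += size
    return schedule

# K1 = K2 = 4 slots, each split into 2 groups -> TB1/TB2 groups alternate.
sched = interleave_slot_groups(4, 4, 2, 2)
```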
Using edge-optimized compute instances to execute user workloads at provider substrate extensions
Techniques are described for enabling users of a service provider network to create and configure “application profiles” that include parameters related to execution of user workloads at provider substrate extensions. Once an application profile is created, users can request the deployment of user workloads to provider substrate extensions by requesting instance launches based on a defined application profile. The service provider network can then automate the launch and placement of the user's workload at one or more provider substrate extensions using edge-optimized compute instances (e.g., compute instances tailored for execution within provider substrate extension environments). In some embodiments, once such edge-optimized instances are deployed, the service provider network can manage the auto-resizing of the instances in terms of various types of computing resources devoted to the instances, manage the lifecycle of instances to ensure maximum capacity availability at provider substrate extension locations, and perform other instance management processes.
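The profile-then-launch flow can be pictured as storing a named parameter set and reusing it for placement at substrate extensions. The profile fields, function names, and location identifiers below are illustrative assumptions, not the provider's actual API:

```python
profiles = {}

def create_profile(name, vcpus, memory_gb, locations):
    # An "application profile": parameters governing workload execution
    # at provider substrate extensions.
    profiles[name] = {"vcpus": vcpus, "memory_gb": memory_gb,
                      "locations": locations}

def launch(profile_name, count):
    # Launch edge-optimized instances based on the profile, spreading
    # placement across the profile's substrate extension locations.
    p = profiles[profile_name]
    return [{"location": p["locations"][i % len(p["locations"])],
             "vcpus": p["vcpus"]} for i in range(count)]

create_profile("video-edge", vcpus=2, memory_gb=4,
               locations=["pse-east-1", "pse-west-1"])
instances = launch("video-edge", 3)
```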