H04L47/828

Providing on-demand production of graph-based relationships in a cloud computing environment
11405328 · 2022-08-02

Described herein is a system for automatically capturing configuration changes to cloud computing resources. The system may detect changes to configurations of cloud computing resources across geographic regions, in real time. The changes may be stored in a central data storage device instantiated by a central cloud computing account. Furthermore, a relationship graph indicating the relationships between the different cloud computing resources may be generated.
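The change capture and graph construction described above can be illustrated with a minimal Python sketch. The class name, event shape, and the rule that a resource is "related" to anything its configuration references are all assumptions for illustration, not the patent's actual design.

```python
from collections import defaultdict

# Hypothetical sketch: accumulate configuration-change events from multiple
# regions into a central store and derive a relationship graph between resources.
class ConfigChangeStore:
    def __init__(self):
        self.changes = []              # central record of all change events
        self.graph = defaultdict(set)  # resource -> related resources

    def record_change(self, region, resource_id, config):
        # Store the raw change event (a real system would timestamp it).
        self.changes.append((region, resource_id, config))
        # Treat any resource referenced in the new configuration as related.
        for ref in config.get("references", []):
            self.graph[resource_id].add(ref)
            self.graph[ref].add(resource_id)

    def related(self, resource_id):
        return sorted(self.graph[resource_id])

store = ConfigChangeStore()
store.record_change("us-east-1", "vm-1", {"references": ["disk-7", "net-3"]})
store.record_change("eu-west-1", "disk-7", {"references": ["vm-1"]})
print(store.related("vm-1"))   # ['disk-7', 'net-3']
print(store.related("disk-7")) # ['vm-1']
```

Because edges are recorded symmetrically, the graph can be queried from either endpoint of a relationship.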

SERVICE PACKET FORWARDING METHOD, APPARATUS, AND COMPUTER STORAGE MEDIUM

This application discloses a service packet forwarding method, an apparatus, and a computer storage medium. In the method, when a first service function forwarder (SFF) receives a packet sent by any second virtual resource module connected to it, the first SFF dynamically determines a forwarding path for the packet based on configuration information of a second virtual resource module that implements a second service function. Load balancing is thereby performed dynamically on a per-packet basis rather than uniformly by the service function chain ingress node, reducing pressure on the ingress node.
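The per-packet path selection at the SFF can be sketched as follows in Python. The class name, the capacity-based selection rule, and the module identifiers are hypothetical stand-ins for whatever configuration information the patent's forwarder actually consults.

```python
# Hypothetical sketch: a service function forwarder (SFF) that picks the
# next-hop virtual resource module for the second service function per packet,
# using the modules' configured capacity rather than relying on the chain's
# ingress node to balance traffic up front.
class ServiceFunctionForwarder:
    def __init__(self, sf2_modules):
        # sf2_modules: {module_id: configured capacity}
        self.capacity = dict(sf2_modules)
        self.assigned = {m: 0 for m in sf2_modules}

    def forward(self, packet):
        # Choose the module with the lowest load relative to its capacity.
        target = min(self.assigned,
                     key=lambda m: self.assigned[m] / self.capacity[m])
        self.assigned[target] += 1
        return target

sff = ServiceFunctionForwarder({"sf2-a": 2, "sf2-b": 1})
path = [sff.forward(f"pkt-{i}") for i in range(3)]
print(path)  # ['sf2-a', 'sf2-b', 'sf2-a']
```

The weighted ratio sends twice as many packets to the module configured with twice the capacity, which is the dynamic-balancing effect the abstract describes.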

Processing allocation in data center fleets
11394660 · 2022-07-19

A method and system for allocating tasks among processing devices in a data center. The method may include: receiving a request to allocate a task to one or more processing devices, the request indicating a required bandwidth for performing the task; maintaining a list of predefined processing device groups connected to a host server, the list indicating which processing device groups are available for task allocation and the available bandwidth of each available group; assigning the task to a processing device group having an available bandwidth greater than or equal to the required bandwidth; and updating the list to indicate that the processing device group to which the task is assigned, and any other processing device group sharing at least one processing device with it, are unavailable. The task may be assigned to the available processing device group needing the lowest amount of power.

Availability groups of cloud provider edge locations

Techniques are described for enabling users of a cloud provider network to discover “availability groups” provided by a cloud provider network and to request the launch of computing resources into selected availability groups. Some cloud provider networks are expanding the definition of traditional “availability zones” to include new types of availability zones representing various types of provider substrate extension edge locations—including, for example, cloud-provider managed substrate extensions associated with separate control planes, 5G-enabled provider substrate extensions connected to communications service provider networks, and the like. Availability groups can be used to represent various defined collections of these new types of provider substrate extensions, where each availability group may be defined such that it includes a set of provider substrate extensions with a similar set of characteristics and capabilities.
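One way to picture grouping substrate extensions by shared characteristics is the sketch below. Representing a characteristic set as a `frozenset` key is purely an illustrative choice; the patent does not specify this encoding.

```python
# Hypothetical sketch: partition provider substrate extensions into
# availability groups keyed by their shared characteristics, so users can
# discover a group and target launches at it.
def build_availability_groups(extensions):
    # extensions: [{"id": str, "characteristics": frozenset}]
    groups = {}
    for ext in extensions:
        groups.setdefault(ext["characteristics"], []).append(ext["id"])
    return groups

exts = [
    {"id": "pse-1", "characteristics": frozenset({"5g", "csp"})},
    {"id": "pse-2", "characteristics": frozenset({"5g", "csp"})},
    {"id": "pse-3", "characteristics": frozenset({"managed"})},
]
groups = build_availability_groups(exts)
print(groups[frozenset({"5g", "csp"})])  # ['pse-1', 'pse-2']
```

Each key then names one availability group: every member extension offers the same set of capabilities, matching the abstract's definition.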

Compositional reasoning techniques for role reachability analyses in identity systems

Techniques are described for using compositional reasoning techniques to perform role reachability analyses relative to collections of user accounts and roles of a cloud provider network. Delegated role-based resource management generally is a method for controlling access to resources in cloud provider networks and other distributed systems. Many cloud provider networks, for example, implement identity and access management subsystems using this approach, where the concept of “roles” is used to specify which resources can be accessed by people, software, or (recursively) by other roles. An abstraction of the role reachability analysis is provided that can be used as input to a model-checking application to reason about such role reachability questions (e.g., which roles of an organization are reachable from other roles).
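The abstraction the last sentence mentions reduces role reachability to reachability in a directed graph of assume-role permissions. The sketch below answers the question with a plain breadth-first search; a model checker would operate on a richer abstraction, and the role names and edge encoding here are hypothetical.

```python
from collections import deque

# Hypothetical sketch: an edge r1 -> r2 means principals holding role r1 are
# permitted to assume role r2. BFS then answers "which roles are reachable
# from a given starting role?", the core role-reachability question.
def reachable_roles(assume_edges, start):
    # assume_edges: {role: [roles it may assume]}
    seen, queue = {start}, deque([start])
    while queue:
        role = queue.popleft()
        for nxt in assume_edges.get(role, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

edges = {"dev": ["ci"], "ci": ["deploy"], "deploy": [], "audit": []}
print(sorted(reachable_roles(edges, "dev")))  # ['ci', 'deploy']
```

Note the transitive hop: "dev" reaches "deploy" only through "ci", which is exactly the kind of indirect escalation a reachability analysis is meant to surface.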

SERVER-SIDE RESOURCE MONITORING IN A DISTRIBUTED DATA STORAGE ENVIRONMENT
20220272151 · 2022-08-25

Apparatus and method for performing real-time monitoring of server-side resources required to satisfy a client-side request in a distributed data storage environment, such as in a cloud computing or HPC (high performance computing) network. A client device is configured to issue a service request to carry out a service application associated with one or more server nodes. A request scheduler forwards the service request from the client device to a selected server node associated with the service request. A service log accumulates entries associated with data transfer operations carried out by the server node responsive to the service request over each of a succession of time periods. A service monitor accumulates, for each of the succession of time periods, information associated with the data transfer operations. A monitor tool aggregates the cumulative information to provide an indication of server-side resources utilized to satisfy the service request.
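The service log and monitor-tool aggregation can be sketched as follows. The class name, the `(request_id, bytes)` entry shape, and integer period indices are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical sketch: a service log that accumulates data-transfer entries
# per time period, plus an aggregation step (the "monitor tool" role) that
# totals the server-side resources consumed by one client request.
class ServiceLog:
    def __init__(self):
        self.entries = defaultdict(list)  # period -> [(request_id, bytes)]

    def record(self, period, request_id, nbytes):
        self.entries[period].append((request_id, nbytes))

    def aggregate(self, request_id):
        # Sum transfers for one request across all accumulated periods.
        return sum(n for period in self.entries
                   for rid, n in self.entries[period] if rid == request_id)

log = ServiceLog()
log.record(0, "req-1", 100)
log.record(1, "req-1", 250)
log.record(1, "req-2", 50)
print(log.aggregate("req-1"))  # 350
```

Keeping entries bucketed by period preserves the per-interval view for real-time monitoring while still allowing the cross-period total the abstract describes.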

User-configured multi-location service deployment and scaling

Techniques are described for a location-aware service-oriented application deployment management (“SOADM”) service that abstracts the complexities of deploying distributed applications in a cloud provider network providing many possible deployment zones of one or multiple types. These deployment zone types can include traditional cloud provider regions and availability zones as well as so-called edge locations (e.g., cloud provider operated edge locations, customer-operated edge locations, third-party operated edge locations, communications service provider (CSP) associated edge locations). The SOADM service enables users to create service group configurations representing a service-oriented application, including its constituent services and dependent resources, and specify distribution strategies for deploying and/or redistributing application services and resources, among other configurations. Using such configurations, the SOADM service automatically deploys and scales simple or complex, single or multi-service applications for users across any number of deployment zones and deployment zone types.
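A service group configuration with per-service distribution strategies might expand into concrete deployments as in this sketch. The configuration schema (zone types, replica counts) is invented for illustration and is not the SOADM service's actual interface.

```python
# Hypothetical sketch: expand a service group configuration into a concrete
# (service, deployment zone) plan, honoring each service's distribution
# strategy over zone types and its desired replica count.
def plan_deployments(service_group, zones):
    # zones: {zone_name: zone_type}
    plan = []
    for service, strategy in service_group.items():
        targets = [z for z, ztype in zones.items()
                   if ztype in strategy["zone_types"]]
        plan.extend((service, z) for z in targets[: strategy["replicas"]])
    return plan

zones = {"region-a": "region", "edge-1": "edge", "edge-2": "edge"}
group = {
    "api":   {"zone_types": {"region"}, "replicas": 1},
    "cache": {"zone_types": {"edge"},   "replicas": 2},
}
plan = plan_deployments(group, zones)
print(plan)  # [('api', 'region-a'), ('cache', 'edge-1'), ('cache', 'edge-2')]
```

Re-running the planner after zones are added or removed would yield the redistribution behavior the abstract attributes to the service.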

Using edge-optimized compute instances to execute user workloads at provider substrate extensions

Techniques are described for enabling users of a service provider network to create and configure “application profiles” that include parameters related to execution of user workloads at provider substrate extensions. Once an application profile is created, users can request the deployment of user workloads to provider substrate extensions by requesting instance launches based on a defined application profile. The service provider network can then automate the launch and placement of the user's workload at one or more provider substrate extensions using edge-optimized compute instances (e.g., compute instances tailored for execution within provider substrate extension environments). In some embodiments, once such edge-optimized instances are deployed, the service provider network can manage the auto-resizing of the instances in terms of various types of computing resources devoted to the instances, manage the lifecycle of instances to ensure maximum capacity availability at provider substrate extension locations, and perform other instance management processes.
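Profile-driven placement can be pictured with the sketch below. The profile fields and the first-fit selection rule are assumptions; the patent leaves the actual placement logic unspecified.

```python
# Hypothetical sketch: match an application profile's resource requirements
# against candidate provider substrate extensions and pick the first location
# that can host an edge-optimized compute instance for the workload.
def place_instance(profile, extensions):
    # extensions: {location: {"cpus": int, "memory_gb": int}}
    for loc, cap in extensions.items():
        if cap["cpus"] >= profile["cpus"] and cap["memory_gb"] >= profile["memory_gb"]:
            return loc
    return None  # no extension can satisfy the profile

profile = {"cpus": 4, "memory_gb": 8}
sites = {
    "pse-east": {"cpus": 2, "memory_gb": 4},
    "pse-west": {"cpus": 8, "memory_gb": 16},
}
print(place_instance(profile, sites))  # pse-west
```

Because placement is driven entirely by the profile, the same profile can be reused for repeated launch requests, which is the workflow the abstract describes.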

METHOD AND APPARATUS FOR LDPC TRANSMISSION OVER A CHANNEL BONDED LINK
20220109522 · 2022-04-07

Disclosed are a particular overall architecture for transmission over a bonded channel system consisting of two interconnected MoCA (Multimedia over Coax Alliance) 2.0 SoCs (Systems on a Chip), and a method and apparatus for the case of a “bonded” channel network. In a bonded channel network, the data is divided into two segments, the first of which is transported over a primary channel and the second of which is transported over a secondary channel.
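The two-segment split and reassembly can be shown in a few lines of Python. The even split ratio and function names are illustrative; the actual segmentation in a MoCA 2.0 bonded link is defined by the standard, not by this sketch.

```python
# Hypothetical sketch: divide a payload into two segments for a bonded pair of
# channels (primary and secondary), then reassemble in order on the far side.
def bond_split(data, primary_share=0.5):
    cut = int(len(data) * primary_share)
    return data[:cut], data[cut:]   # (primary segment, secondary segment)

def bond_reassemble(primary_seg, secondary_seg):
    # Segments arrive over independent channels; order is restored here.
    return primary_seg + secondary_seg

payload = b"multimedia-over-coax-frame"
p, s = bond_split(payload)
assert bond_reassemble(p, s) == payload
```

In practice each segment also carries sequencing information so the receiver can reorder segments that arrive at different times over the two channels.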

METHOD AND SYSTEM FOR TRAFFIC SCHEDULING
20220094633 · 2022-03-24

A method and system for traffic scheduling are provided, wherein the method includes: preconfiguring policy routing in a router of a target node server; counting the current access traffic of each of a plurality of ports, and generating a traffic scheduling instruction based on the counted access traffic and the policy routing; and sending the traffic scheduling instruction to the target node server.
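The counting-and-instruction step can be sketched in Python. The instruction format, the busiest-to-idlest shifting rule, and the imbalance threshold are all assumptions standing in for whatever the preconfigured policy routing actually dictates.

```python
# Hypothetical sketch: inspect the current access traffic counted per port and
# emit a scheduling instruction that shifts load from the busiest port toward
# the idlest one, consistent with a preconfigured policy.
def make_schedule_instruction(port_traffic, threshold):
    # port_traffic: {port: current access traffic}
    busiest = max(port_traffic, key=port_traffic.get)
    idlest = min(port_traffic, key=port_traffic.get)
    if port_traffic[busiest] - port_traffic[idlest] <= threshold:
        return None  # traffic already balanced; no instruction needed
    return {"from_port": busiest, "to_port": idlest}

instr = make_schedule_instruction({"p1": 90, "p2": 20, "p3": 40}, threshold=30)
print(instr)  # {'from_port': 'p1', 'to_port': 'p2'}
```

The resulting instruction is what would be sent to the target node server, whose router applies it through the preconfigured policy routing.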