Patent classifications
H04L12/915
Systems and methods for computing infrastructure resource allocation
Embodiments include a resource allocation system for managing execution of a computing task by a hierarchically-arranged computing infrastructure. In embodiments, the resource allocation system can comprise a resource map, an index processor, and an allocation manager. The resource map can include data elements that are associated with each service provider, including parent-child relationships. Workloads can be assigned to providers based on one or more optimization indexes calculated for each service provider based on a plurality of level-specific performance metrics received from one or more monitoring engines.
Method and device for data shunting
A method and device for data shunting in communications are provided. The method includes: determining, by a first network device of a first network, a second Quality of Service parameter for a second network according to a first Quality of Service parameter of data to be transmitted in the first network; and transmitting, by the first network device, some or all of the data to be transmitted to a second network device of the second network according to the second Quality of Service parameter. According to the application, the shunted data is transmitted in the shunting network at the designated quality of service, so that service quality requirements are satisfied while the reliability of data transmission and system resource utilization are improved.
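A minimal sketch of the QoS derivation step, assuming the first network's parameter is a traffic class and the second network's parameter is a priority/delay pair; the class names and values below are invented for illustration:

```python
# Hypothetical mapping from first-network QoS classes to second-network QoS parameters.
QOS_MAP = {
    "conversational": {"priority": 1, "max_delay_ms": 100},
    "streaming":      {"priority": 2, "max_delay_ms": 300},
    "background":     {"priority": 4, "max_delay_ms": 1000},
}

def derive_second_qos(first_qos_class: str) -> dict:
    """Determine the second network's QoS parameter from the first network's
    QoS parameter of the data to be shunted (falls back to background)."""
    return QOS_MAP.get(first_qos_class, QOS_MAP["background"])
```

The first network device would then forward the shunted data tagged with the derived parameter so the second network can honor it.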
Efficient buffer utilization for network data units
Approaches, techniques, and mechanisms are disclosed for efficiently buffering data units within a network device. A traffic manager or other network device component receives Transport Data Units (TDUs), which are sub-portions of Protocol Data Units (PDUs). Rather than buffer an entire TDU together, the component divides the TDU into multiple Storage Data Units (SDUs) that can fit in SDU buffer entries within physical memory banks. A TDU-to-SDU Mapping (TSM) memory stores TSM lists that indicate which SDU entries store SDUs for a given TDU. Physical memory banks in which the SDUs are stored may be grouped together into logical SDU banks that are accessed together as if a single bank. The TSM memory may include a number of distinct TSM banks, with each logical SDU bank having a corresponding TSM bank. Techniques for maintaining inter-packet and intra-packet linking data compatible with such buffers are also disclosed.
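The TDU-to-SDU split and the TSM lookup can be sketched as follows. The SDU entry size and the dictionary-backed banks are simplifications standing in for fixed-size physical memory banks:

```python
SDU_SIZE = 64  # bytes per SDU buffer entry (assumed size)

class Buffer:
    def __init__(self) -> None:
        self.sdu_bank: dict[int, bytes] = {}   # SDU entry index -> SDU payload
        self.tsm: dict[str, list[int]] = {}    # TSM list: TDU id -> SDU entries
        self._next_entry = 0

    def write_tdu(self, tdu_id: str, tdu: bytes) -> None:
        """Divide a TDU into SDUs that fit in fixed-size buffer entries,
        recording which entries hold it in a TSM list."""
        entries = []
        for off in range(0, len(tdu), SDU_SIZE):
            self.sdu_bank[self._next_entry] = tdu[off:off + SDU_SIZE]
            entries.append(self._next_entry)
            self._next_entry += 1
        self.tsm[tdu_id] = entries

    def read_tdu(self, tdu_id: str) -> bytes:
        """Reassemble a TDU by following its TSM list."""
        return b"".join(self.sdu_bank[i] for i in self.tsm[tdu_id])
```

A real device would additionally stripe SDUs across logical banks for parallel access and maintain the inter-packet and intra-packet linking data the abstract mentions.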
Service resource management system and method thereof
A service resource management system, including: a cloud data unit for storing resources collected from a cloud service; a service group management unit for allocating the resources in the cloud data unit to set up a service group that provides a service; and a service group automatic generation module for automatically generating the service group, classifying resources according to their settings by a name that includes a key value or a tag value.
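A small sketch of the automatic grouping step, assuming each collected resource carries a tag dictionary; the `service` tag key and the `unassigned` fallback group are assumptions for illustration:

```python
from collections import defaultdict

def build_service_groups(resources: list[dict]) -> dict[str, list[str]]:
    """Classify cloud resources into service groups by a tag value;
    resources without the tag fall into an 'unassigned' group."""
    groups: dict[str, list[str]] = defaultdict(list)
    for r in resources:
        key = r.get("tags", {}).get("service", "unassigned")
        groups[key].append(r["name"])
    return dict(groups)
```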
Differentiated routing system and method
A differentiated routing system is provided for routing a communication service according to an access point of a subscriber terminal to a first network domain. The system includes a computing system executing a core routing engine (CRE) that receives a request for a communication service from the subscriber terminal. When the communication service is to be routed to a second network domain, the CRE identifies an access point at which the subscriber terminal accesses the first network domain and includes a tag in the request according to the identified access point. The tag includes information to be used by the second network domain for routing the communication service. The CRE then transmits the request to the second network domain.
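The tagging step of the CRE can be sketched as below. The access-point names, tag values, and request shape are hypothetical stand-ins:

```python
# Hypothetical mapping from access points of the first network domain to tags
# the second domain uses for its routing decision.
ACCESS_POINT_TAGS = {
    "wifi-gw-1": "untrusted-access",
    "lte-gw-1": "trusted-access",
}

def route_request(request: dict, access_point: str) -> dict:
    """Core routing engine step: when the service must cross into the second
    domain, include a tag derived from the subscriber's access point."""
    tagged = dict(request)  # leave the original request untouched
    tagged["routing-tag"] = ACCESS_POINT_TAGS.get(access_point, "default-access")
    return tagged
```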
Excess bitrate distribution based on quality gain
A method for delivering video content from a server to a plurality of media devices is disclosed that accurately distributes excess bandwidth. The method includes: determining, by the server, the bandwidth to allocate to each of the plurality of media devices using a hypertext transfer protocol-based live streaming client model, a need parameter vector, and/or measured bandwidth limitations associated with each of the plurality of media devices; and providing the allocated bandwidth to each of the plurality of media devices. The video content is transmitted in a plurality of segments from the server, and each segment is transmitted at a bitrate that may vary from segment to segment.
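One way such an allocation could work, sketched under assumed semantics: bandwidth is split proportionally to each device's need parameter, clamped to its measured limit, and any excess is redistributed among devices that still have headroom. The device names and numbers are invented:

```python
def allocate(total: float, needs: dict[str, float], caps: dict[str, float]) -> dict[str, float]:
    """Distribute total bandwidth proportionally to per-device need parameters,
    clamped to measured limits; leftover bandwidth is redistributed."""
    alloc = {d: 0.0 for d in needs}
    active = set(needs)
    remaining = total
    while active and remaining > 1e-9:
        need_sum = sum(needs[d] for d in active)
        spent = 0.0
        for d in list(active):
            share = remaining * needs[d] / need_sum      # proportional share
            grant = min(share, caps[d] - alloc[d])       # clamp to measured limit
            alloc[d] += grant
            spent += grant
            if caps[d] - alloc[d] < 1e-9:
                active.discard(d)                        # device is saturated
        if spent < 1e-9:
            break
        remaining -= spent                               # redistribute the rest
    return alloc
```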
Optimizing fault tolerance on exascale architecture
Methods and apparatus for optimizing fault tolerance on HPC (high-performance computing) systems, including systems employing exascale architectures. The methods and apparatus implement one or more management/service nodes in a management/service node layer and a plurality of sub-management nodes in a sub-management node layer. The sub-management nodes implement redundant cross-connected software components in different sub-layers to provide redundant channels. The redundant software components in a lowest sub-layer are connected to switches in racks containing multiple service nodes. The sub-management nodes are configured to employ the multiple redundant channels to collect telemetry data and other data from the service nodes, such that the system continues to collect the data in the event of a software component failure or hardware failure.
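The failover behavior of the redundant channels can be sketched as a simple try-in-order loop; the channel callables and telemetry shape are illustrative assumptions:

```python
def collect_telemetry(channels, node: str):
    """Try the redundant cross-connected channels in turn; the first healthy
    channel returns the node's telemetry, so collection survives a single
    software-component or link failure."""
    for channel in channels:
        try:
            return channel(node)
        except ConnectionError:
            continue  # fail over to the next redundant channel
    raise RuntimeError(f"all redundant channels failed for node {node}")
```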
Resource reservation method and related device
The present disclosure relates to resource reservation methods. One example method includes: receiving, by a controller, a resource reservation request for a communication session sent by a sending device, where the request carries resource requirement information; obtaining, by the controller based on the request, identification information of a network device through which data transmission of the communication session between the sending device and a receiving device passes, and a resource index of the network device; sending, by the controller, the resource requirement information and the resource index to the network device based on the identification information, where they instruct the network device to configure a resource for the communication session; and sending, by the controller, the identification information and the resource index to the sending device.
Network bandwidth reservations for system traffic and virtual computing instances
Virtual computing instances are provisioned with network resource allocation constraints, which may include hard constraints that must be met in order for the virtual computing instances to be created in a host server. Network resources from multiple hosts may be pooled in a virtual switch, and a cloud management system (CMS) may ensure that a network bandwidth reservation for a new virtual computing instance can be accommodated by network bandwidth in the pool that is reserved for communication endpoint traffic. In addition to such CMS-level constraint enforcement, techniques disclosed herein may also enforce network bandwidth constraints at the host level to guarantee that network bandwidth reservation requirements for communication endpoint(s) of a new virtual computing instance can be satisfied by a particular host before creating the virtual computing instance in that host.
Enabling IP carrier peering
Methods and systems may provide carrier ENUM based routing for subscriber devices (e.g., voice or other multimedia services over IP) to locate and to connect to subscriber devices of another IP peering carrier. A private ENUM database may be used to connect subscribers of disparate carriers using a domain for designated breakout gateway control functions.