H04L12/923

RESOURCE ALLOCATION METHOD AND RESOURCE ALLOCATION SYSTEM
20210392086 · 2021-12-16

Techniques are provided that can perform appropriate resource allocation to systems such as chat bots and back-end systems, even when a user is engaged in conversations (such as text messages or speech messages) unrelated to service menus, without increasing the resources of the chat bots and back-end systems. Means are provided for determining the allocation of chat-bot and back-end-system resources based on the number of conversations associated with the service menu and the number of conversations unrelated to the service menu.
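As a rough illustration of the claimed idea, the allocation could be as simple as splitting a fixed pool of capacity in proportion to the two conversation counts. The following sketch is an assumption for illustration, not the patent's disclosed method; the function name and the proportional rule are invented:

```python
def split_capacity(total_units, menu_convs, offmenu_convs):
    """Divide a fixed pool of chat-bot/back-end units in proportion to
    service-menu conversations vs. conversations unrelated to the menu."""
    total = menu_convs + offmenu_convs
    if total == 0:
        return total_units, 0  # nothing active: keep everything on the menu path
    menu_units = round(total_units * menu_convs / total)
    return menu_units, total_units - menu_units
```

With 30 menu-related and 10 unrelated conversations, a pool of 10 units would split 8/2 under this rule, without the pool itself growing.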

Allocating Resources
20210390481 · 2021-12-16

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for allocating resources. In one aspect, a method includes receiving, from a resource requester, a first request statement specifying a first computing resource, a first bid for the first computing resource, a total quantity of the first resource requested by the resource requester, and a minimum quantity of the first resource that the resource requester is willing to accept. A second request statement can be received from the resource requester that specifies a second bid for a second computing resource and a condition statement specifying that the second bid is valid only if the first computing resource will be allocated to the resource requester. A determination can be made that allocating the second computing resource and at least the minimum quantity of the first resource to the resource requester will achieve a resource allocation objective.
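The request statements above can be modeled as records carrying a bid, quantities, and an optional condition on another resource being allocated. A minimal sketch, assuming invented field names and a simple "required resource granted at or above its minimum" reading of the condition:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RequestStatement:
    resource: str
    bid: float
    total_qty: int = 0
    min_qty: int = 0
    valid_if_allocated: Optional[str] = None  # conditional bid on another resource

def effective_bids(statements, tentative_allocation):
    """Keep only bids whose condition (if any) is met by the tentative
    allocation: the required resource is granted at or above its minimum."""
    mins = {s.resource: s.min_qty for s in statements}
    kept = []
    for s in statements:
        dep = s.valid_if_allocated
        if dep is None or tentative_allocation.get(dep, 0) >= mins.get(dep, 1):
            kept.append(s)
    return kept
```

Here a second bid conditioned on the first resource simply drops out of consideration whenever the tentative allocation does not grant at least the first resource's minimum quantity.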

ENHANCED REDEPLOYING OF COMPUTING RESOURCES
20210377184 · 2021-12-02

Examples described herein relate to a method, a resource management system, and a non-transitory machine-readable medium for redeploying a computing resource. Data related to a performance parameter corresponding to a plurality of computing resources deployed on a plurality of host-computing nodes may be received. The performance parameter is associated with one or both of: communication between computing resources of the plurality of computing resources, or communication of the plurality of computing resources with a network device. Further, for a computing resource of the plurality of computing resources, a candidate host-computing node is determined from the plurality of host-computing nodes based on the data related to the performance parameter, and the computing resource may be redeployed on the candidate host-computing node.
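The candidate-selection step could be as simple as ranking hosts by an observed communication metric for the resource. A minimal sketch, assuming a per-(resource, host) cost table; the metric and data shapes are assumptions, not the application's disclosure:

```python
def pick_candidate_host(resource, hosts, comm_cost):
    """Choose the host-computing node with the lowest observed communication
    cost (e.g. latency) for this computing resource; unknown pairs rank last."""
    return min(hosts, key=lambda h: comm_cost.get((resource, h), float("inf")))
```

The redeployment itself (live migration or redeploy-and-restart) would then target the returned node.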

SPLIT-BRAIN PREVENTION IN A HIGH AVAILABILITY SYSTEM DURING WORKLOAD MIGRATION
20210385164 · 2021-12-09

In some embodiments, a method receives, at a first host, a control message from a second host. The control message includes a first address to use as a next hop to reach an active workload that has migrated to the second host from another host. The method reprograms a local route table to include a policy to send packets that check the liveness of the active workload with the next hop of the first address. A packet is sent from a standby workload to the active workload using the next hop of the first address to check the liveness of the active workload. The packet is encapsulated and sent between the first host and the second host using an overlay channel, between a first endpoint of the overlay channel on the first host and a second endpoint of the channel on the second host.
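The route-table reprogramming can be pictured as a small key-value update: on receiving the control message, liveness probes for the migrated workload are pinned to the advertised next hop. A toy sketch with invented names (a real implementation would program forwarding-plane routes, not a dict):

```python
def on_control_message(route_table, workload, next_hop):
    """Reprogram the local route table: liveness probes for this workload
    must now be sent via the advertised next hop (the first address)."""
    route_table[workload] = next_hop
    return route_table

def probe_next_hop(route_table, workload):
    """Next hop the standby workload uses when checking the active's liveness."""
    return route_table.get(workload)
```

Because probes follow the freshly advertised next hop rather than a stale route, the standby keeps seeing the active as alive across the migration, which is what prevents the split-brain promotion.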

SPECULATIVE RESOURCE ALLOCATION FOR ROUTING ON INTERCONNECT FABRICS
20210367905 · 2021-11-25

Methods and systems related to speculative resource allocation for routing on an interconnect fabric are disclosed herein. One disclosed method includes speculatively allocating a collection of resources to support a set of paths for a multicast through an interconnect fabric. The method also includes aggregating a set of responses from the set of paths at a branch node on the set of paths. If a resource contention is detected, the set of responses will include an indicator of the resource contention. The method then further includes transmitting, from the branch node and in response to the indicator of the resource contention, a deallocate message downstream and the indicator of the resource contention upstream, and reallocating resources for the multicast after a hold period.
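The branch-node aggregation step reduces to an OR over the per-path contention flags, with two resulting actions. A minimal sketch, assuming dict-shaped responses (the message names are invented):

```python
def branch_aggregate(responses):
    """Aggregate per-path responses at a branch node. If any path reports
    contention, emit a deallocate downstream and propagate the flag upstream."""
    contended = any(r.get("contention", False) for r in responses)
    downstream_msgs = ["deallocate"] if contended else []
    upstream_msg = {"contention": contended}
    return downstream_msgs, upstream_msg
```

After the downstream deallocate frees the speculatively held resources, the hold period mentioned in the abstract would elapse before reallocation is retried.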

AGENT-BASED THROTTLING OF COMMAND EXECUTIONS
20210365283 · 2021-11-25

Disclosed herein are methods, systems, and processes to perform granular and selective agent-based throttling of command executions. A resource consumption threshold is allocated to an agent process that is configured to perform data collection tasks on a host computing device. A desired throttle level is generated for the agent process based on the resource consumption threshold allocated to the agent process, and execution of the agent process is controlled in polling intervals. For each polling interval, a current throttle level for the agent process is determined based on a run count and a skip count of the agent process; the agent process is suspended if the agent process is active and the current throttle level is greater than the desired throttle level, and the agent process is resumed if the agent process is idle and the current throttle level is not greater than the desired throttle level.
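One plausible reading of "current throttle level from a run count and a skip count" is the fraction of recent polling intervals in which the agent actually ran. The sketch below follows the abstract's suspend/resume comparisons under that assumed definition; both the definition and the names are illustrative, not the patent's:

```python
def current_throttle_level(run_count, skip_count):
    """Assumed definition: fraction of polling intervals the agent ran in."""
    total = run_count + skip_count
    return run_count / total if total else 0.0

def throttle_action(active, run_count, skip_count, desired_level):
    """Per-interval decision for the agent process, as in the abstract."""
    cur = current_throttle_level(run_count, skip_count)
    if active and cur > desired_level:
        return "suspend"   # running above the desired level: skip this interval
    if not active and cur <= desired_level:
        return "resume"    # idle and within budget: run this interval
    return "no-op"
```

An agent that ran in 8 of its last 10 intervals against a 0.5 desired level would be suspended; one that ran in 2 of 10 would be resumed.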

Intelligent serverless function scaling
11184263 · 2021-11-23

A plurality of serverless function invocations are received. A quantity of serverless function invocations of the plurality of serverless function invocations that corresponds to a particular type of serverless function invocation is determined. A number of serverless functions are scaled at a determined rate in view of the quantity of serverless function invocations corresponding to the particular type of serverless function invocation.
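Counting one invocation type and scaling at a rate derived from that count could look like the following. The linear rate and the floor of one instance are invented parameters for illustration only:

```python
from collections import Counter

def scale_target(invocations, invocation_type, per_invocation_rate=0.1, floor=1):
    """Scale the serverless function instance count in view of how many
    invocations are of the given type (hypothetical linear policy)."""
    qty = Counter(invocations)[invocation_type]
    return max(floor, round(qty * per_invocation_rate))
```

So a burst of 50 invocations of one type would, under this toy policy, scale that function to 5 instances while leaving rarely invoked types at the floor.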

CACHE ALLOCATION SYSTEM

Examples described herein relate to a network interface device comprising: a host interface, a direct memory access (DMA) engine, and circuitry to allocate a region in a cache to store a context of a connection. In some examples, the circuitry is to allocate a region in a cache to store a context of a connection based on connection reliability and wherein connection reliability comprises use of a reliable transport protocol or non-use of a reliable transport protocol. In some examples, the circuitry is to allocate a region in a cache to store a context of a connection based on expected length of runtime of the connection and the expected length of runtime of the connection is based on a historic average amount of time the context for the connection was stored in the cache. In some examples, the circuitry is to allocate a region in a cache to store a context of a connection based on content transmitted and the content transmitted comprises congestion messaging payload or acknowledgement. In some examples, the circuitry is to allocate a region in a cache to store a context of a connection based on application-specified priority level and the application-specified priority level comprises an application-specified traffic class level or class of service level.
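The four allocation criteria the abstract enumerates (reliability, expected runtime in cache, content transmitted, application priority) could feed a single priority score that decides whether a connection's context earns a cache region. All weights and field names below are invented for illustration; the hardware would implement this in circuitry, not Python:

```python
def cache_context_priority(conn):
    """Toy weighted score over the abstract's four criteria for deciding
    whether to allocate a cache region to a connection's context."""
    score = 0.0
    if conn.get("reliable_transport"):                    # connection reliability
        score += 2.0
    score += min(conn.get("avg_cached_secs", 0.0), 10.0) / 10.0  # expected runtime
    if conn.get("payload_kind") in ("congestion", "ack"):  # content transmitted
        score += 1.0
    score += conn.get("app_priority", 0) * 0.5            # app-specified priority
    return score
```

Contexts scoring above some cutoff would get a dedicated region; the rest would fall back to ordinary (evictable) caching.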

Connection management service

A technology is described for determining a trust score or trust metric used to determine whether to allow a connection to a high-risk destination. In one example of the technology, a connection request to initiate a connection with a destination over a network may be received at a communication service. The communication service may identify a customer account associated with the connection request and obtain a trust score that indicates whether the connection request associated with the customer account may be allowed. The communication service may determine whether the connection request associated with the customer account is allowed to initiate the connection with the destination based on the trust score, and initiate the connection with the destination over the network when the trust score allows the connection request.
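At its simplest, the gating step compares the account's trust score against a threshold before the service initiates the connection. A minimal sketch; the 0.7 threshold and the score table are invented example values, not the patent's scoring method:

```python
def may_initiate(trust_scores, account, threshold=0.7):
    """Allow a connection request to a high-risk destination only when the
    customer account's trust score clears the threshold; unknown accounts
    default to zero trust."""
    return trust_scores.get(account, 0.0) >= threshold
```

A production service would presumably compute the score from account history rather than look it up from a static table, and might vary the threshold with the destination's assessed risk.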

NETWORK SERVICE MANAGEMENT DEVICE, NETWORK SERVICE MANAGEMENT METHOD, AND NETWORK SERVICE MANAGEMENT PROGRAM
20210344611 · 2021-11-04

[Problem] To shorten the lead time for providing a network service.

[Solution] A network service management apparatus 100 uses resources included in a network functions virtualization infrastructure 140 to provide a network service. It includes an orchestrator 110 that defines resources that satisfy a resource requirement of a virtual network function constituting the network service and are allocated to the virtual network function, and reserves those resources, and a virtualized infrastructure manager 130 that secures the reserved resources, activates the virtual network function on the secured resources, and generates the network service. When the securing of the reserved resources fails, the orchestrator 110 re-reserves resources to replace the reserved resources, and the virtualized infrastructure manager 130 secures the re-reserved resources and activates the virtual network function on the secured resources.
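The reserve, secure, activate flow with re-reservation on failure can be sketched as a small retry loop, with the reserve and secure callables standing in for the orchestrator 110 and virtualized infrastructure manager 130 respectively. The function names, the callable interfaces, and the attempt cap are assumptions for illustration:

```python
def instantiate_service(requirement, reserve, secure, activate, max_attempts=3):
    """Reserve resources for a VNF, have the VIM secure them, and activate;
    on a securing failure, re-reserve replacement resources and retry."""
    for _ in range(max_attempts):
        reserved = reserve(requirement)      # orchestrator: define and reserve
        if secure(reserved):                 # VIM: secure the reserved resources
            return activate(reserved)        # VIM: activate VNF, generate service
        # securing failed: loop re-reserves replacements instead of aborting
    raise RuntimeError("unable to secure resources for the network service")
```

Retrying with re-reserved resources, rather than restarting the whole provisioning flow, is what shortens the lead time in the failure case.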