Logic scaling sets for cloud-like elasticity of legacy enterprise applications
11323389 · 2022-05-03

Methods, systems, and computer-readable storage media for scaling legacy systems. An instance manager determines, from a pattern associated with a system executing within a landscape, that the status of the system is to change to scaled-in; the pattern itself contains no reference to instances of systems executed within landscapes. In response, the instance manager identifies, from a logic scaling set associated with the system, one or more instances of the system that can be scaled in, selects at least one of those instances, and executes scaling of the system based on the selected instance.
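The flow described above can be sketched as follows. All names (`LogicScalingSet`, `InstanceManager`, the keep-one-instance policy) are illustrative assumptions, not the patent's actual implementation; the point is that the pattern carries only a target status, while the candidate instances come from the separate logic scaling set.

```python
class LogicScalingSet:
    """Associates a system with its running instances (illustrative)."""
    def __init__(self, system_id, instances):
        self.system_id = system_id
        self.instances = instances  # instance ids of the system

    def scalable_in(self):
        # Candidates that may be removed; here, all but the first,
        # so the system stays alive (an assumed policy).
        return self.instances[1:]

class InstanceManager:
    def __init__(self, scaling_sets):
        self.scaling_sets = scaling_sets  # system_id -> LogicScalingSet

    def handle_pattern(self, system_id, pattern):
        # The pattern only signals a status change; it names no instances.
        if pattern.get("status") != "scaled-in":
            return None
        candidates = self.scaling_sets[system_id].scalable_in()
        if not candidates:
            return None
        chosen = candidates[0]  # select at least one instance
        self.scaling_sets[system_id].instances.remove(chosen)
        return chosen           # execute the scale-in

mgr = InstanceManager({"erp": LogicScalingSet("erp", ["i-1", "i-2", "i-3"])})
stopped = mgr.handle_pattern("erp", {"status": "scaled-in"})
```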

Secure multi-tenant cloud subscription sharing

The disclosed techniques improve the efficiency and functionality of cloud services by providing a system for sharing individual subscriptions among multiple tenants. A cloud service provider utilizes a location-based manager to retrieve a pool of subscriptions from a cloud platform. Individual subscriptions within the pool can define a set of cloud resources for a resource unit such as a server farm. The location-based manager can assign one or multiple subscriptions for a resource unit to share amongst multiple tenants. In this way, security boundaries between individual tenants can be maintained while also dramatically reducing the number of subscriptions a cloud service provider must manage. In addition, by assigning subscriptions at the granularity of resource units rather than tenants, the location-based manager can enhance the security of the cloud platform by creating a logical zone about individual resource units to serve as an additional security boundary.
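A minimal sketch of the assignment granularity described above, with hypothetical names (`LocationBasedManager`, the FIFO pool draw): one subscription is bound to a resource unit such as a server farm, and every tenant on that unit shares it, so the subscription count tracks units rather than tenants.

```python
class LocationBasedManager:
    """Assigns pooled subscriptions per resource unit (illustrative)."""
    def __init__(self, subscription_pool):
        self.pool = list(subscription_pool)  # retrieved from the cloud platform
        self.assignments = {}                # resource_unit -> subscription

    def assign(self, resource_unit):
        # One subscription per resource unit, shared by all its tenants;
        # repeated calls for the same unit return the same subscription.
        if resource_unit not in self.assignments:
            self.assignments[resource_unit] = self.pool.pop(0)
        return self.assignments[resource_unit]

mgr = LocationBasedManager(["sub-A", "sub-B"])
# Two tenants hosted on the same server farm share one subscription:
tenant1_sub = mgr.assign("farm-1")
tenant2_sub = mgr.assign("farm-1")
```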

Server-side resource monitoring in a distributed data storage environment
11722558 · 2023-08-08

Apparatus and method for performing real-time monitoring of server-side resources required to satisfy a client-side request in a distributed data storage environment, such as in a cloud computing or HPC (high performance computing) network. A client device is configured to issue a service request to carry out a service application associated with one or more server nodes. A request scheduler forwards the service request from the client device to a selected server node associated with the service request. A service log accumulates entries associated with data transfer operations carried out by the server node responsive to the service request over each of a succession of time periods. A service monitor accumulates, for each of the succession of time periods, information associated with the data transfer operations. A monitor tool aggregates the cumulative information to provide an indication of server-side resources utilized to satisfy the service request.
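The accumulate-then-aggregate step can be illustrated with a small sketch; `ServiceMonitor` and its byte/operation counters are assumed stand-ins for the service log and monitor tool, not the patent's actual data model.

```python
from collections import defaultdict

class ServiceMonitor:
    """Accumulates per-period transfer stats for a service request (sketch)."""
    def __init__(self):
        self.periods = defaultdict(lambda: {"bytes": 0, "ops": 0})

    def record(self, period, nbytes):
        # Service-log step: one entry per data transfer operation,
        # bucketed into a succession of time periods.
        entry = self.periods[period]
        entry["bytes"] += nbytes
        entry["ops"] += 1

    def aggregate(self):
        # Monitor-tool step: roll the periods up into a single indication
        # of server-side resources used to satisfy the request.
        return {
            "bytes": sum(p["bytes"] for p in self.periods.values()),
            "ops": sum(p["ops"] for p in self.periods.values()),
        }

mon = ServiceMonitor()
mon.record(0, 4096)
mon.record(0, 2048)
mon.record(1, 1024)
totals = mon.aggregate()
```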

Communication device and method for operating a communication system for transmitting time critical data
11316654 · 2022-04-26

A communication device and method for operating a communication system for transmitting time-critical data. A respective individual time window within predefined time intervals is specified for data flows assigned to selected control applications running on terminals, where each time window has an individual cycle time that is a multiple of, or equal to, a general cycle time. First and second communication devices each check, for the selected control applications, whether a specified time window is available for data transmission. If a time window is available, information about its beginning within the predefined time intervals is transmitted to the terminal on which the respective selected control application is executing, and the data flows assigned to the selected control applications are each transmitted according to that information.
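The availability check can be sketched as a simple non-overlap test within one general cycle. This is a deliberate simplification: the assumed `GENERAL_CYCLE_MS` constant and the single-cycle reservation model stand in for the patent's per-flow cycle multiples.

```python
GENERAL_CYCLE_MS = 10  # assumed general cycle time

def reserve_window(reserved, begin, length):
    """Device-side check: is [begin, begin+length) free within the cycle?
    If so, reserve it and return its beginning, which would then be sent
    to the terminal running the control application (illustrative)."""
    end = begin + length
    for (b, l) in reserved:
        if begin < b + l and b < end:  # overlaps an existing window
            return None
    reserved.append((begin, length))
    return begin

reserved = []
w1 = reserve_window(reserved, 0, 3)  # granted: window begins at 0
w2 = reserve_window(reserved, 2, 3)  # overlaps the first -> rejected
w3 = reserve_window(reserved, 4, 3)  # granted: window begins at 4
```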

DISTRIBUTED FAIR ALLOCATION OF SHARED RESOURCES TO CONSTITUENTS OF A CLUSTER
20220124049 · 2022-04-21

Techniques are disclosed for allocating shared resources to nodes in a distributed computing network system. Nodes request a lock for each instance of a computing resource (e.g., a virtual IP address associated with a service provided by the distributed computing network system) from a distributed lock manager. The distributed lock manager maintains a queue of requests for each instance of the shared resource. Upon receiving a lock from the distributed lock manager, the receiving node performs a fairness allocation protocol to determine whether to accept the lock. If the lock is accepted, the shared computing resource associated with it is configured.
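The queue-plus-fairness interaction might look like the following single-process sketch. The FIFO queue and the "fewer than fair share" acceptance rule are assumptions standing in for the patent's fairness allocation protocol.

```python
from collections import deque

class DistributedLockManager:
    """Per-resource FIFO queue of lock requests (single-process sketch)."""
    def __init__(self):
        self.queues = {}

    def request(self, resource, node):
        self.queues.setdefault(resource, deque()).append(node)

    def grant(self, resource):
        # Hand the lock to the longest-waiting requester, if any.
        q = self.queues.get(resource)
        return q.popleft() if q else None

def accept_lock(node_loads, node, fair_share):
    # Fairness protocol: accept only while the node holds fewer
    # resource instances than its fair share (an assumed policy).
    return node_loads.get(node, 0) < fair_share

dlm = DistributedLockManager()
dlm.request("vip-10.0.0.1", "node-a")
dlm.request("vip-10.0.0.1", "node-b")
holder = dlm.grant("vip-10.0.0.1")  # node-a requested first
accepted = accept_lock({"node-a": 0}, holder, fair_share=2)
```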

SYSTEM AND METHOD FOR FAST APPLICATION AUTO-SCALING

A resource management system is disclosed herein that quickly and dynamically tailors application resource provisioning to real-time application resource consumption. The resource management system may service application requests using resources selected from a pool of servers, the pool including a mixture of virtual server resources and serverless instance resources. The serverless instance resources may comprise software objects programmed using a machine image reflecting one or more states of a virtual application server booted using application-specific program code. Supporting an application using serverless instances enables dynamic scaling of application resources to support real-time application servicing loads.
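The mixed-pool scaling idea can be sketched as below; the one-capacity-unit-per-load-unit policy and the `ResourcePool` API are illustrative assumptions, with serverless instances absorbing whatever load the fixed virtual servers cannot.

```python
class ResourcePool:
    """Mixed pool: fixed virtual servers plus serverless instances
    spun up on demand from a pre-booted machine image (sketch)."""
    def __init__(self, virtual_servers, max_serverless):
        self.virtual = virtual_servers
        self.serverless = []
        self.max_serverless = max_serverless

    def scale(self, load):
        # Serverless instances cover the load the fixed servers cannot,
        # capped by the configured maximum (assumed policy).
        needed = min(max(0, load - len(self.virtual)), self.max_serverless)
        while len(self.serverless) < needed:
            self.serverless.append(f"sl-{len(self.serverless)}")
        del self.serverless[needed:]  # release instances on scale-down
        return len(self.virtual) + len(self.serverless)

pool = ResourcePool(["vm-1", "vm-2"], max_serverless=5)
cap_hi = pool.scale(load=4)  # 2 virtual servers + 2 serverless instances
cap_lo = pool.scale(load=1)  # serverless instances released again
```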

Method of risk-sensitive rate correction for dynamic heterogeneous networks

A dynamic heterogeneous network for transmitting media. The network has plural sources sending signals through various links and routers to plural destinations. Upon identifying a bottleneck link, the network matches the actual demand rate to the actual service rate. A buffer setpoint is established to accommodate the difference between the demand rate and the service rate. The network determines an epoch having a penalty for deviation from the buffer setpoint. The rate allowance is reallocated to reduce the media bottleneck.
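A per-epoch correction of this kind can be sketched as a simple proportional adjustment around the buffer setpoint. The linear penalty and the `gain` parameter are illustrative assumptions, not the patent's risk-sensitive formulation.

```python
def correct_rate(allowed_rate, buffer_level, setpoint, gain=0.5):
    """Per-epoch rate correction (sketch): penalize deviation of the
    bottleneck buffer from its setpoint by moving the rate allowance
    toward the service rate."""
    deviation = buffer_level - setpoint
    # Buffer above setpoint -> demand exceeds service -> reduce allowance;
    # buffer below setpoint -> spare capacity -> increase allowance.
    return max(0.0, allowed_rate - gain * deviation)

# Buffer 10 units above setpoint: allowance drops by gain * 10.
rate = correct_rate(allowed_rate=100.0, buffer_level=40.0, setpoint=30.0)
```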

On-demand access to compute resources
11765101 · 2023-09-19

Disclosed are systems, methods, and computer-readable media for controlling and managing the identification and provisioning of resources within an on-demand center, as well as the transfer of workload to the provisioned resources. One aspect involves creating a virtual private cluster within the on-demand center for the particular workload from a local environment. A method of managing resources between a local compute environment and an on-demand environment includes: detecting an event associated with the local compute environment; based on the detected event, identifying information about the local environment; establishing communication with an on-demand compute environment and transmitting the information about the local environment to it; provisioning resources within the on-demand compute environment to substantially duplicate the local environment; and transferring workload from the local environment to the on-demand compute environment. The event can be a threshold or a triggering event within or outside of the local environment.
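The detect–duplicate–transfer sequence can be sketched as one function. The queue-depth threshold as the triggering event, and the dict-based environment description, are assumptions for illustration only.

```python
def maybe_burst(local_env, on_demand, queue_depth, threshold):
    """Sketch of the triggering flow: when the event (here, a queue-depth
    threshold) fires, provision the on-demand side to substantially
    duplicate the local environment and transfer the workload."""
    if queue_depth < threshold:
        return None  # no triggering event
    # Identify information about the local environment and transmit it.
    spec = {"os": local_env["os"], "cpus": local_env["cpus"]}
    on_demand["provisioned"] = spec                      # duplicate
    on_demand["workload"] = local_env.pop("workload")    # transfer
    return spec

local = {"os": "linux", "cpus": 8, "workload": ["job-1", "job-2"]}
remote = {}
spec = maybe_burst(local, remote, queue_depth=12, threshold=10)
```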

System and method for optimizing deployment of a processing function in a media production workflow

A system is provided for optimizing deployment of a processing function in a media production workflow. The system includes a media production workflow generator that builds the media production workflow, which includes the processing function, and determines deployment criteria comprising an input dataset for the processing function and an atomic compute function for executing the processing function. A deployment topology generator generates topologies of the resources available in a cloud computing network based on the determined deployment criteria, with the generated topologies indicating different configurations of resources for executing the processing function and a processor for executing the atomic compute function. A deployment optimizer then selects an optimal topology for deploying the processing function within the cloud computing network, with the optimal topology selected to include the processor that optimizes accessibility of electronic memory for executing the atomic compute function.
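The optimizer's selection step can be sketched as choosing, among candidate topologies, the one whose processor has the best memory accessibility. The `mem_latency_ns` field and the candidate list are assumptions used to make the selection criterion concrete.

```python
def select_topology(topologies):
    """Deployment-optimizer sketch: among candidate topologies (a
    processor plus its memory placement for the atomic compute
    function), pick the one with the lowest memory-access latency."""
    return min(topologies, key=lambda t: t["mem_latency_ns"])

# Hypothetical topologies generated for one processing function:
candidates = [
    {"processor": "gpu-node-1", "mem_latency_ns": 120},
    {"processor": "cpu-node-3", "mem_latency_ns": 80},
]
best = select_topology(candidates)
```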

System and method for resource-aware and time-critical IoT frameworks

Methods and apparatus for resource optimization in Internet-of-Things (IoT) networks are presented. For instance, the disclosure presents an example method executed by a network node (106) in an IoT system (100). In some embodiments, the example method includes predicting (202) a likelihood that a future event will be detected by one or more IoT devices (102) in the IoT system (100) under different potential resource allocation and IoT device settings. The predicting (202) is conducted subject to resource availability constraints in some examples. In addition, the example method includes, based on the predicting (202), adapting (204) at least one of resource allocation and IoT device settings in the IoT system (100) for the future event. This adapting (204) is conducted such that the likelihood that the future event will be detected is maximized under the resource availability constraints according to a target optimization function.
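Steps (202) and (204) amount to a constrained maximization over candidate settings, which can be sketched as follows; the `cost`/`sample_hz` fields, the budget, and the likelihood lambda are all illustrative stand-ins for the patent's resource constraints and target optimization function.

```python
def adapt_settings(candidates, budget, likelihood):
    """Sketch of predicting (202) / adapting (204): score each candidate
    resource-allocation / device-setting combination with a likelihood
    model, discard those over the resource budget, keep the maximizer."""
    feasible = [c for c in candidates if c["cost"] <= budget]
    return max(feasible, key=likelihood) if feasible else None

candidates = [
    {"sample_hz": 1, "cost": 1},
    {"sample_hz": 10, "cost": 4},
    {"sample_hz": 100, "cost": 9},  # best detector, but over budget
]
# Assumed likelihood model: detection improves with sampling rate.
best = adapt_settings(candidates, budget=5,
                      likelihood=lambda c: 1 - 1 / (1 + c["sample_hz"]))
```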