G06F9/505

Techniques for preventing concurrent execution of declarative infrastructure provisioners

Techniques for preventing concurrent execution of an infrastructure orchestration service are described. Worker nodes can receive instructions, or tasks, for deploying infrastructure resources and can provide heartbeat notifications to a scheduler node; these heartbeats are also treated as a lease. A signing proxy can track the heartbeat notifications sent from the worker nodes to the scheduler node. The signing proxy can receive requests corresponding to performance of the tasks assigned to the worker nodes. The signing proxy can determine whether the lease between each worker node and the scheduler is valid. If the lease is valid, the signing proxy may make a call to services on behalf of the worker node; if the lease is not valid, the signing proxy may decline to make the call. Instead, the signing proxy may cut off all outgoing network traffic, blocking the worker node's access to services.
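A minimal sketch of the lease check described above, in Python. The class name, the lease TTL, and the heartbeat bookkeeping are illustrative assumptions; the abstract does not specify an implementation:

```python
import time

class SigningProxy:
    """Hypothetical sketch: forward a worker's service call only while its
    heartbeat lease with the scheduler is still valid."""

    def __init__(self, lease_ttl_seconds=30):
        self.lease_ttl = lease_ttl_seconds
        self.last_heartbeat = {}  # worker_id -> timestamp of last heartbeat

    def record_heartbeat(self, worker_id, now=None):
        self.last_heartbeat[worker_id] = now if now is not None else time.time()

    def lease_valid(self, worker_id, now=None):
        now = now if now is not None else time.time()
        beat = self.last_heartbeat.get(worker_id)
        return beat is not None and (now - beat) <= self.lease_ttl

    def handle_request(self, worker_id, call, now=None):
        # Sign and forward on the worker's behalf only if the lease holds;
        # otherwise block the outgoing call entirely.
        if self.lease_valid(worker_id, now):
            return call()
        raise PermissionError(f"lease expired for worker {worker_id}; call blocked")
```

In this sketch the proxy never signs anything itself; the point is that lease validation is the single gate in front of every outbound call.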

Methods and systems for balancing loads in distributed computer networks for computer processing requests with variable rule sets and dynamic processing loads
11693709 · 2023-07-04

Methods and systems are described for balancing loads in distributed computer networks for computer processing requests with variable rule sets and dynamic processing loads. The methods and systems may include determining an initial allocation of the plurality of processing requests to the plurality of available domains that has a lowest initial sum excess processing load. The methods and systems may then retrieve an updated estimated processing load for at least one of the plurality of processing requests and determine a secondary allocation of the plurality of processing requests to the plurality of available domains.
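The allocation objective above can be sketched as follows, assuming the "excess processing load" of a domain is its assigned load above capacity. The greedy placement is an illustrative stand-in; the abstract does not say how the lowest-excess allocation is found:

```python
def excess_load(assignment, capacities, loads):
    """Sum, over domains, of assigned load above capacity."""
    totals = {d: 0.0 for d in capacities}
    for req, dom in assignment.items():
        totals[dom] += loads[req]
    return sum(max(0.0, totals[d] - capacities[d]) for d in capacities)

def allocate(requests, capacities, loads):
    """Greedy sketch: place each request (largest first) on the domain that
    currently adds the least excess load. A real system might search more
    exhaustively for the true minimum."""
    assignment = {}
    totals = {d: 0.0 for d in capacities}
    for req in sorted(requests, key=lambda r: -loads[r]):
        best = min(capacities,
                   key=lambda d: max(0.0, totals[d] + loads[req] - capacities[d]))
        assignment[req] = best
        totals[best] += loads[req]
    return assignment
```

Re-running `allocate` with updated load estimates yields the secondary allocation the abstract describes.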

Workload pool hierarchy for a search and indexing system

Resource management includes storing, for multiple workload pools of a data intake and query system, a workload pool hierarchy arranged in multiple workload pool layers. After storing, a processing request is assigned a selected subset of workload pools in a second layer of the workload pool hierarchy based on the type of the processing request. The processing request is then assigned to an individual workload pool in the selected subset to obtain a selected workload pool. Execution of the processing request is initiated on the selected workload pool.
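The two-step selection can be sketched as below. The pool names, the hierarchy shape, and the least-loaded tiebreak are all assumptions for illustration:

```python
# Hypothetical two-layer hierarchy: request type -> subset of second-layer pools.
hierarchy = {
    "ingest": ["ingest-a", "ingest-b"],   # pool names assumed
    "search": ["search-a", "search-b"],
}

def select_pool(request_type, pool_load):
    # Step 1: the request type selects a subset of pools in the second layer.
    subset = hierarchy[request_type]
    # Step 2: pick one pool from the subset (here, the least loaded).
    return min(subset, key=lambda p: pool_load[p])
```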

ASSESSING SECURITY VULNERABILITIES IN CLOUD-NATIVE APPLICATIONS

According to some embodiments, a method is performed by a distributed cloud-native application. The method comprises receiving a request from a user to perform an operation. The user is associated with a risk profile. The method further comprises determining a call path through the distributed cloud-native application to perform the operation and classifying a risk level associated with the determined call path based on a distributed call graph. The distributed call graph comprises a risk value for each call path through the distributed cloud-native application, and each call path comprises one or more distributed cloud-native application components. The risk value is based on a weakness rating associated with each component in the call path. The method further comprises determining that the risk level associated with the determined call path is acceptable based on the risk profile associated with the user, and performing the operation.
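A small sketch of the risk check. The abstract leaves the combining function unspecified, so summing component weakness ratings along the path is an assumption, as are the component names:

```python
def path_risk(call_path, weakness_ratings):
    """Risk value of a call path: assumed here to be the sum of the
    weakness ratings of the components on the path."""
    return sum(weakness_ratings[c] for c in call_path)

def authorize(call_path, weakness_ratings, user_risk_tolerance):
    """Perform the operation only if the path's risk is acceptable
    under the user's risk profile (modeled as a numeric tolerance)."""
    return path_risk(call_path, weakness_ratings) <= user_risk_tolerance
```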

HARVESTING AND USING EXCESS CAPACITY ON LEGACY WORKLOAD MACHINES
20230004447 · 2023-01-05

Some embodiments provide a novel method for deploying containerized applications. The method of some embodiments deploys a data collecting agent on a machine that operates on a host computer and executes a set of one or more workload applications. From this agent, the method receives data regarding consumption of a set of resources allocated to the machine by the set of workload applications. The method assesses excess capacity of the set of resources for use to execute a set of one or more containers, and then deploys the set of one or more containers on the machine to execute one or more containerized applications. In some embodiments, the set of workload applications are legacy workloads deployed on the machine before the installation of the data collecting agent. By deploying one or more containers on the machine, the method of some embodiments maximizes the usage of the machine, which was previously deployed to execute legacy non-containerized workloads.
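The capacity assessment can be sketched as below. Using peak observed consumption as the baseline and a simple fit check are illustrative assumptions; the abstract does not define the assessment:

```python
def excess_capacity(allocated, agent_samples):
    """Per-resource headroom: the machine's allocation minus the peak
    consumption reported by the data-collecting agent."""
    peak = {r: max(s[r] for s in agent_samples) for r in allocated}
    return {r: allocated[r] - peak[r] for r in allocated}

def containers_fit(excess, per_container_need, count):
    """Check whether `count` containers fit into the assessed headroom."""
    return all(excess[r] >= per_container_need[r] * count
               for r in per_container_need)
```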

CO-OPERATIVE AND ADAPTIVE MACHINE LEARNING EXECUTION ENGINES

Techniques for executing machine learning (ML) models including receiving an indication to execute an ML model on a processing core; determining a resource allocation for executing the ML model on the processing core; determining that a layer of the ML model will use a first amount of the resource, wherein the first amount is more than the amount of the resource allocated; determining that an adaptation may be applied to executing the layer of the ML model; executing the layer of the ML model using the adaptation, wherein executing the layer using the adaptation reduces the first amount of the resource used by the layer as compared to executing the layer without using the adaptation; and outputting a result of the ML model based on the executed layer.
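The adaptation decision can be sketched as below. Modeling an adaptation as a named reduction factor (e.g. quantization halving resource use) is an assumption; the abstract does not name specific adaptations:

```python
def run_layer(layer_cost, allocated, adaptations):
    """Hypothetical sketch: if a layer's projected resource use exceeds its
    allocation, apply the first adaptation whose reduction factor brings it
    within budget; otherwise run the layer unadapted.

    `adaptations` is a list of (name, reduction_factor) pairs, assumed here.
    Returns (resource_used, adaptation_applied_or_None)."""
    if layer_cost <= allocated:
        return layer_cost, None
    for name, factor in adaptations:
        if layer_cost * factor <= allocated:
            return layer_cost * factor, name
    raise RuntimeError("no adaptation fits the allocation")
```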

Tenant-Level Monitoring
20230004432 · 2023-01-05

Techniques are disclosed relating to monitoring behavior of a computing system shared by multiple tenants. In some embodiments, a computer cluster is maintained that hosts containers accessible to a plurality of tenants of the computer cluster. First telemetry data collected about a particular one of the plurality of tenants is received from a container hosted at a first of a plurality of servers of the computer cluster. The first telemetry data identifies the particular tenant's consumption of a resource provided by the container. In response to the computer cluster migrating the container from the first server to a second of the plurality of servers, second telemetry data collected about the particular tenant's consumption of the resource is received from the migrated container hosted at the second server. An analysis is performed of the first and second telemetry data to identify whether the particular tenant's consumption of the resource has changed.
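The cross-migration analysis can be sketched as below. Averaging samples and comparing against a relative threshold is an illustrative assumption; the abstract only says the two sets of telemetry are analyzed for a change:

```python
def consumption_changed(pre_migration, post_migration, threshold=0.2):
    """Compare a tenant's mean resource consumption before and after the
    container migrates; flag a change when the relative difference exceeds
    an assumed threshold."""
    before = sum(pre_migration) / len(pre_migration)
    after = sum(post_migration) / len(post_migration)
    if before == 0:
        return after > 0
    return abs(after - before) / before > threshold
```

Because the telemetry is collected by the container itself, the comparison stays valid across the migration from the first server to the second.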

Continuation analysis tasks for GPU task scheduling

Systems, apparatuses, and methods for implementing continuation analysis tasks (CATs) are disclosed. In one embodiment, a system implements hardware acceleration of CATs to manage the dependencies and scheduling of an application composed of multiple tasks. In one embodiment, a continuation packet is referenced directly by a first task. When the first task completes, the first task enqueues a continuation packet on a first queue. The first task can specify on which queue to place the continuation packet. The agent responsible for the first queue dequeues and executes the continuation packet which invokes an analysis phase which is performed prior to determining which dependent tasks to enqueue. If it is determined during the analysis phase that a second task is now ready to be launched, the second task is enqueued on one of the queues. Then, an agent responsible for this queue dequeues and executes the second task.
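The enqueue/dequeue flow can be sketched in software (the patent describes hardware acceleration; this Python model only illustrates the ordering). Queue names and the drain loop are assumptions:

```python
from collections import deque

# Hypothetical model of continuation packets: a finishing task enqueues a
# continuation on a queue it names; the queue's agent runs an analysis phase
# that decides which dependent tasks are ready to enqueue next.

queues = {"q0": deque(), "q1": deque()}
log = []

def first_task():
    log.append("first")
    # The first task chooses the queue for its continuation packet.
    queues["q1"].append(continuation_packet)

def continuation_packet():
    # Analysis phase: decide which dependent tasks are now ready.
    log.append("analysis")
    queues["q0"].append(second_task)

def second_task():
    log.append("second")

def run_all():
    queues["q0"].append(first_task)
    while any(queues.values()):
        for q in queues.values():   # each agent drains its own queue
            while q:
                q.popleft()()
```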

Methods, systems and computer program products for optimizing computer system resource utilization during in-game resource farming
11544115 · 2023-01-03

Disclosed are methods, systems and computer program products for optimizing computer system resource utilization during in-game resource farming. In some non-limiting embodiments or aspects, the present disclosure describes a method for optimizing computer system resource utilization during in-game resource farming, the method including detecting a gameplay state associated with an executing instance of a gaming application and, based on the detected gameplay state, selecting a gaming application mode from among a plurality of available gaming application modes. In some non-limiting embodiments or aspects, the method may also include implementing the selected gaming application mode for subsequent execution of the gaming application on the computing system.
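The state-to-mode selection can be sketched as a lookup. The state names, mode parameters, and fallback behavior are all assumptions for illustration:

```python
# Hypothetical mapping from detected gameplay state to an application mode
# that trades fidelity for resource use during unattended resource farming.
MODES = {
    "active_play": {"fps_cap": 60, "render": "full"},
    "afk_farming": {"fps_cap": 10, "render": "minimal"},
    "menu_idle":   {"fps_cap": 5,  "render": "minimal"},
}

def select_mode(gameplay_state):
    # Fall back to the full-fidelity mode for unrecognized states.
    return MODES.get(gameplay_state, MODES["active_play"])
```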

Systems and methods for centralization and diagnostics for live virtual server performance data
11544098 · 2023-01-03

Methods and systems for diagnosis of live virtual server performance data are disclosed. In one embodiment, an exemplary method comprises receiving a request to assign a first role to at least one virtual server; configuring the virtual server to associate the first role with a first resource of the virtual server; modifying a database to include an identifier associated with the virtual server and an identifier of the first role assigned to the virtual server; receiving indications of first resource usage; mapping the first resource usage to the first role; storing the indications of first resource usage; associating a change in first resource usage with a corresponding first resource operation; modifying a user interface element for presentation on a web page to include the first resource usage; receiving a request for the web page from a user; and delivering the web page to a user interface.
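The role assignment and usage mapping steps can be sketched as below. The in-memory registry stands in for the database the abstract mentions, and the change-detection rule is an assumption:

```python
class RoleRegistry:
    """Hypothetical sketch: assign roles to virtual servers, map incoming
    resource-usage indications to the server's role, and detect changes."""

    def __init__(self):
        self.roles = {}   # server_id -> assigned role
        self.usage = {}   # (server_id, role) -> list of usage samples

    def assign_role(self, server_id, role):
        self.roles[server_id] = role

    def record_usage(self, server_id, value):
        # Map the resource usage to the role assigned to this server.
        role = self.roles[server_id]
        self.usage.setdefault((server_id, role), []).append(value)

    def usage_changed(self, server_id, threshold):
        # Associate a change in usage with the corresponding resource operation
        # by comparing the two most recent samples (an assumed rule).
        samples = self.usage[(server_id, self.roles[server_id])]
        return len(samples) >= 2 and abs(samples[-1] - samples[-2]) > threshold
```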