G06F2209/5021

Queue Management System and Method
20220350645 · 2022-11-03 ·

A method, computer program product, and computing system for: receiving a new request on an IT computing device; and determining whether the new request on the IT computing device should be immediately processed or queued in a pending queue for subsequent processing based, at least in part, upon: a root request limit of the IT computing device, a global request limit of the IT computing device, and a sibling status of the new request.
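The admission test described above can be sketched roughly as follows. This is an illustrative reading, not the patent's claimed logic: the counter names and the rule that a sibling of an already-active request bypasses the root limit are assumptions.

```python
from collections import deque

class RequestQueueManager:
    """Sketch of an immediate-vs-queued admission decision based on a root
    request limit, a global request limit, and sibling status (assumed rule)."""

    def __init__(self, root_request_limit, global_request_limit):
        self.root_request_limit = root_request_limit      # cap on active root requests
        self.global_request_limit = global_request_limit  # cap on all active requests
        self.active_roots = 0
        self.active_total = 0
        self.pending = deque()                            # the pending queue

    def submit(self, request, has_active_sibling=False):
        """Process immediately when the limits allow; otherwise queue."""
        over_global = self.active_total >= self.global_request_limit
        over_root = (not has_active_sibling) and self.active_roots >= self.root_request_limit
        if over_global or over_root:
            self.pending.append(request)
            return "queued"
        self.active_total += 1
        if not has_active_sibling:
            self.active_roots += 1
        return "processing"
```

Under this reading, a request with an active sibling is exempt from the root limit but still counts against the global limit.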

Dynamic capacity optimization for shared computing resources segmented into reservation zones

Systems, methods, devices, and other techniques for managing a computing resource shared by a set of online entities. A system can receive a request from a first online entity to reserve capacity of the computing resource. The system determines a relative priority of the first online entity and identifies a reservation zone that corresponds to the relative priority of the first online entity. The system determines whether to satisfy the request based on comparing (i) an amount of the requested capacity of the computing resource and (ii) an amount of the portion of unused capacity of the computing resource designated by the reservation zone that online entities having relative priorities at or below the relative priority of the first online entity are permitted to reserve.
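The capacity comparison in (i) and (ii) can be sketched as below. The zone-share table and the interpretation of a zone as a fraction of unused capacity are assumptions made for illustration.

```python
# Illustrative sketch: a reservation zone caps how much of the unused capacity
# entities at or below a given priority may reserve (fractions are assumed).
def can_reserve(requested, unused_capacity, entity_priority, zone_shares):
    """zone_shares maps a priority level to the fraction of unused capacity
    that entities at or below that priority are permitted to reserve."""
    share = zone_shares.get(entity_priority, 0.0)
    permitted = unused_capacity * share
    return requested <= permitted
```

For example, with shares {1: 1.0, 2: 0.5, 3: 0.1}, a priority-2 entity may claim at most half of the currently unused capacity.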

ALLOCATION OF HETEROGENEOUS COMPUTATIONAL RESOURCE
20220342711 · 2022-10-27 ·

Allocation of computational resource to requested tasks is achieved by running a scheduling operation across a plurality of schedulers, each in communication with a subset of network entities, the schedulers establishing a virtual bus. In certain embodiments, the scheduling operation is able to run continuously, allocating newly arriving task requests as resources become available.
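One way to picture the scheme: several schedulers, each managing its own subset of resources, pull from a shared task structure standing in for the "virtual bus". The shared deque and the greedy pull loop are assumptions for illustration only.

```python
from collections import deque

class Scheduler:
    """One of a plurality of schedulers, each with a subset of resources.
    The shared deque plays the role of the 'virtual bus' (an assumption)."""
    def __init__(self, name, capacity, bus):
        self.name = name
        self.free = capacity   # locally available resource units
        self.bus = bus         # shared queue of (task, required_units)
        self.running = []

    def pull(self):
        # Allocate newly arriving task requests while local resources allow.
        while self.bus and self.free >= self.bus[0][1]:
            task, need = self.bus.popleft()
            self.free -= need
            self.running.append(task)
```

Run continuously, each scheduler drains requests it can satisfy as its own resources become available.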

METHOD FOR THE DEPLOYMENT OF A SOFTWARE MODULE IN A MANUFACTURING OPERATION MANAGEMENT SYSTEM
20230082523 · 2023-03-16 ·

A software module is deployed in a MOM system without requiring the operator to know where to deploy the software module within the network of the computational resources that are addressed and/or accessed within the MOM system. A number of software modules are provided, each including a set of metadata with a number of deploy criteria. A plurality of computational resource layers are provided, with each resource layer having different computational resources and being enabled to communicate layer specific data, which include resource availability information. A deployment instance is executed that evaluates the metadata and the layer specific data and, depending on the evaluation, the computational resource layer and the computational resource on which the software module will be deployed is determined. The software module is then executed on the determined computational resource within the determined computational resource layer.
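The evaluation step, matching a module's deploy criteria against layer-specific availability data, might look like the sketch below. The field names (`cpu`, `mem`, `free_cpu`, `free_mem`) and the first-fit search order are assumptions, not the patent's metadata schema.

```python
# Sketch of the deployment-instance evaluation; field names are assumptions.
def choose_deployment(module_metadata, layers):
    """layers: list of dicts with 'name' and 'resources', each resource
    reporting availability. module_metadata carries the deploy criteria."""
    for layer in layers:
        for resource in layer["resources"]:
            if (resource["free_cpu"] >= module_metadata["cpu"]
                    and resource["free_mem"] >= module_metadata["mem"]):
                return layer["name"], resource["id"]
    return None  # no layer can currently host the module
```

The operator never names a target: the evaluation alone determines both the layer and the resource on which the module runs.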

SYSTEMS AND METHODS FOR LOW LATENCY ANALYTICS AND CONTROL OF DEVICES VIA EDGE NODES AND NEXT GENERATION NETWORKS

A computing architecture providing for rapid analysis and control of an environment via edge computing nodes is disclosed. Input data streams may be captured via one or more data stream independent CPU threads and prepared for processing by one or more machine learning models. The machine learning models may be trained according to different use cases to facilitate a multi-faceted and comprehensive analysis of the input data. The evaluation of the input data against the machine learning models may be facilitated via independent GPU threads (e.g., one thread per model or use case) and the outputs of the models may be evaluated using control logic to produce a set of outcomes and control data. The control data may be utilized to generate one or more command messages that may provide feedback to a remote device or user regarding a state of a monitored environment or other observed condition.
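The "one thread per model" evaluation can be mimicked with ordinary Python threads, as in this sketch. The stand-in model callables and the fan-out of every frame to every model are assumptions; on a real edge node these would be GPU-resident models fed by stream-capture threads.

```python
import queue, threading

def run_pipeline(frames, models):
    """Illustrative sketch: one worker thread per model, mirroring the
    'one thread per model or use case' idea. Models are stand-in callables."""
    inputs = {name: queue.Queue() for name in models}
    outputs = queue.Queue()

    def worker(name, model):
        while True:
            frame = inputs[name].get()
            if frame is None:          # sentinel: input stream ended
                break
            outputs.put((name, model(frame)))

    threads = [threading.Thread(target=worker, args=(n, m)) for n, m in models.items()]
    for t in threads:
        t.start()
    for frame in frames:               # fan each captured frame out to every model
        for name in models:
            inputs[name].put(frame)
    for name in models:
        inputs[name].put(None)
    for t in threads:
        t.join()
    results = []
    while not outputs.empty():
        results.append(outputs.get())
    return results
```

The collected (model, output) pairs would then feed the control logic that produces outcomes and command messages.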

Background processing in a web browser
11481237 · 2022-10-25 ·

A system for executing a software program comprises: a display device for displaying a web based GUI of the software program; and a hardware processor adapted for executing in a web browser a code comprising: executing, in a worker thread that is not a primary thread executing code implementing the web based GUI, a client instruction identified in the primary thread for background processing; while the worker thread executes: displaying in a graphical object of the web based GUI data retrieved from a data structure associated with an outcome of executing the client instruction, where the data structure contains temporary data; and modifying another graphical object of the web based GUI in response to a user instruction received by a user selecting a selectable object of the web based GUI; and modifying the graphical object of the web based GUI when the contents of the data structure are modified.
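The browser pattern above (a worker thread filling a temporary data structure while the GUI thread stays responsive and refreshes on change) has a rough Python analogy. This is only an analogy of the pattern, not the claimed browser mechanism; the squaring "client instruction" is a placeholder.

```python
import threading

class BackgroundResult:
    """Python analogy of the worker-thread pattern: a worker fills a shared
    data structure while the 'UI' thread can keep handling user input and
    re-render whenever the contents change."""
    def __init__(self):
        self._data = []
        self._changed = threading.Event()
        self._lock = threading.Lock()

    def worker(self, items):
        for item in items:
            with self._lock:
                self._data.append(item * item)  # temporary data from the client instruction
            self._changed.set()                 # signal: refresh the graphical object

    def snapshot(self):
        with self._lock:
            return list(self._data)
```

In the browser, the `Event` role is played by messages posted from the Web Worker back to the main thread.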

SYSTEMS AND METHODS FOR AUTOSCALING INSTANCE GROUPS OF COMPUTING PLATFORMS
20230129338 · 2023-04-27 ·

Systems and methods scale an instance group of a computing platform by determining whether to scale up or down the instance group by using historical data from prior jobs wherein the historical data includes one or more of: a data set size used in a prior related job and a code version for a prior related job. The systems and methods also scale the instance group up or down based on the determination. In some examples, systems and methods scale an instance group of a computing platform by determining a job dependency tree for a plurality of related jobs, determining runtime data for each of the jobs in the dependency tree and scaling up or down the instance group based on the determined runtime data.
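A scaling decision driven by prior-job history might be sketched as follows. The heuristic (prefer runs of the same code version, scale their average instance count by relative data set size) is an assumption, not the claimed method.

```python
# Sketch of a historical-data scaling decision; the heuristic is an assumption.
def decide_scaling(current_instances, history, new_job):
    """history: list of dicts with 'data_set_size', 'code_version',
    'instances_used'. Returns the signed change to the instance group."""
    related = [h for h in history if h["code_version"] == new_job["code_version"]] or history
    if not related:
        return 0  # no signal: leave the instance group unchanged
    avg_size = sum(h["data_set_size"] for h in related) / len(related)
    avg_instances = sum(h["instances_used"] for h in related) / len(related)
    target = round(avg_instances * new_job["data_set_size"] / avg_size)
    return target - current_instances  # positive: scale up, negative: scale down
```

The dependency-tree variant would instead aggregate runtime data across all jobs in the tree before picking a target size.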

COOLING-POWER-UTILIZATION-BASED WORKLOAD ALLOCATION SYSTEM

A cooling-power-consumption-based workload allocation system includes a workload allocation system coupled to at least one client device and a plurality of server devices. The workload allocation system receives a first workload request that identifies a first workload from the at least one client device, and determines a first workload priority of the first workload relative to a second workload priority of each second workload being performed by the plurality of server devices. Based on the first workload priority of the first workload relative to the second workload priority of each second workload and a cooling-power-utilization-efficiency ranking of each of the plurality of server devices, the workload allocation system identifies a first server device included in the plurality of server devices for performing the first workload, and causes the first server device to perform the first workload.

CASCADED PRIORITY MAPPING
20230076061 · 2023-03-09 ·

Approaches for scheduling a set of tasks at compute nodes within a cluster computing environment based on a priority are described. In an example, a cascaded priority mapping comprising cascaded priority value nodes is generated, wherein the priority value nodes correspond to the set of tasks that are to be scheduled. Each of the priority value nodes specifies a priority value attributed to a respective task from amongst the set of tasks.
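One plausible reading of a cascaded mapping is a tree of priority value nodes where a child's effective priority cascades from its ancestors. The tree shape, the additive cascade, and the descending-order schedule are all assumptions for illustration.

```python
# Sketch: cascaded priority value nodes; the cascade rule is an assumption.
def flatten_priorities(node, inherited=0):
    """Each node carries a priority value; a child's effective priority
    cascades down from its ancestors. Returns (task, effective) pairs."""
    effective = inherited + node["priority"]
    pairs = [(node["task"], effective)] if "task" in node else []
    for child in node.get("children", []):
        pairs.extend(flatten_priorities(child, effective))
    return pairs

def schedule_order(root):
    # Highest effective priority is scheduled first.
    return [task for task, _ in sorted(flatten_priorities(root), key=lambda p: -p[1])]
```

A deeply nested task can thus outrank a shallow one if its accumulated priority values are larger.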

DISTRIBUTED SYSTEM WORKLOAD MANAGEMENT VIA NODE AUTONOMY
20230072962 · 2023-03-09 ·

A system may include a memory and a processor in communication with the memory. The processor may be configured to perform operations. The operations may include calculating a priority factor with a node autonomous center in a node and computing a node service capability with the node autonomous center. The operations may further include selecting, with the node autonomous center, a task based on the priority factor and the node service capability. The operations may further include directing the task to the node.
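The selection step inside the node autonomous center might look like the sketch below. The scoring rule (feasibility filter on capability, then highest priority factor wins) is an assumption.

```python
# Sketch of node-autonomous task selection; the scoring rule is an assumption.
def select_task(tasks, node_capability):
    """tasks: list of dicts with 'name', 'priority_factor',
    'required_capability'. The node picks the highest-priority task
    its computed service capability can serve, or None."""
    feasible = [t for t in tasks if t["required_capability"] <= node_capability]
    if not feasible:
        return None
    return max(feasible, key=lambda t: t["priority_factor"])["name"]
```

Because each node runs this locally, no central dispatcher is needed: the node selects a task and directs it to itself.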