G06F2209/486

Apparatus and method for performance state matching between source and target processors based on interprocessor interrupts
11775336 · 2023-10-03

Apparatus, method, and machine-readable medium to provide performance state matching between source and target processors based on inter-processor interrupts. An exemplary apparatus includes a target processor to execute a receiving task at a first performance level and a source processor to execute a sending task at a second performance level higher than the first performance level. The sending task is to store, into a memory location, interrupt routing data indicating a pairing between the sending task and the receiving task and indicating that the sending task is to dispatch work to be processed by the receiving task. The apparatus further includes a performance management unit to detect the pairing between the sending task and the receiving task based on the interrupt routing data and responsively adjust the performance level of the target processor from the first performance level to the second performance level based, at least in part, on the pairing.
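The core idea can be sketched in a few lines. This is a rough illustration only, not the patented implementation; all class and method names here are assumptions for the example.

```python
# Hypothetical sketch: a performance management unit reads interrupt
# routing data (a sending->receiving task pairing stored in memory) and
# raises the target processor's performance level to match the source's.

class Processor:
    def __init__(self, name, perf_level):
        self.name = name
        self.perf_level = perf_level

class PerformanceManagementUnit:
    def __init__(self):
        # memory location holding interrupt routing data:
        # sending task -> (source processor, target processor)
        self.routing = {}

    def store_routing(self, sending_task, source, target):
        self.routing[sending_task] = (source, target)

    def on_interrupt(self, sending_task):
        # Detect the pairing and match the target's performance state
        # to the (higher) source performance state.
        source, target = self.routing[sending_task]
        if target.perf_level < source.perf_level:
            target.perf_level = source.perf_level

src = Processor("source", perf_level=3)   # second (higher) level
tgt = Processor("target", perf_level=1)   # first (lower) level
pmu = PerformanceManagementUnit()
pmu.store_routing("sender", src, tgt)
pmu.on_interrupt("sender")
print(tgt.perf_level)  # target raised to the source's level: 3
```

The point of the sketch is the trigger: the performance change is driven by the interrupt routing data, not by observed load on the target.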

Memory module threading with staggered data transfers
11755507 · 2023-09-12

A method of transferring data between a memory controller and at least one memory module via a primary data bus having a primary data bus width is disclosed. The method includes accessing a first memory device group via a corresponding data bus path in response to a threaded memory request from the memory controller. The accessing results in data groups collectively forming a first data thread transferred across a corresponding secondary data bus path. Transfer of the first data thread across the primary data bus width is carried out over a first time interval, while using less than the continuous transfer throughput of the primary data bus during that first time interval. During the first time interval, at least one data group from a second data thread is transferred on the primary data bus.
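The staggering can be pictured as interleaving two threads' data groups on one bus timeline. A minimal simulation, with illustrative names only (this models the scheduling idea, not the hardware):

```python
# Two data threads share the primary bus by staggering their data groups,
# so each thread uses less than the bus's full continuous throughput
# while both threads' transfers overlap in time.

def stagger(thread_a, thread_b):
    """Interleave the data groups of two threads onto one bus timeline."""
    timeline = []
    for i in range(max(len(thread_a), len(thread_b))):
        if i < len(thread_a):
            timeline.append(("A", thread_a[i]))
        if i < len(thread_b):
            timeline.append(("B", thread_b[i]))
    return timeline

bus = stagger(["a0", "a1", "a2"], ["b0", "b1"])
print(bus)
# [('A', 'a0'), ('B', 'b0'), ('A', 'a1'), ('B', 'b1'), ('A', 'a2')]
```

During thread A's transfer interval, groups from thread B occupy the slots A leaves free, which is the overlap the abstract describes.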

Parallel handling of a tree data structure for multiple system processes

The technology describes scanning tree data structures (trees) for multiple processes, at least partly in parallel. A service scans a tree from a beginning tree element to an ending tree element on behalf of a process; while scanning, another process can join the scan at an intermediate tree element location (e.g., a key). For the subsequent process, the service scans the tree from the intermediate location to the tree end, thereby visiting tree elements in parallel until the tree end, then continues from the beginning tree element to the intermediate location for the subsequent process. The service thus completes a full carousel-type revolution for each process. One or more other processes can join an ongoing scan at any time, facilitating further parallel tree element visits, while still obtaining a full scan of the entire set of tree elements. The service also handles changing tree versions during the scanning.
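The carousel order seen by a late joiner is easy to show concretely. A minimal sketch, assuming a flat list of keys stands in for the tree's element order (function and variable names are illustrative):

```python
# A process that joins a scan partway through still visits every element:
# it rides from its join point to the tree end, then wraps around from
# the tree beginning back to the join point -- one full revolution.

def carousel_scan(keys, join_at_index):
    """Return the visit order seen by a process joining mid-scan."""
    tail = keys[join_at_index:]   # shared visits to the tree end
    head = keys[:join_at_index]   # wrap-around back to the join point
    return tail + head

keys = ["a", "b", "c", "d", "e"]
late_joiner = carousel_scan(keys, join_at_index=2)
print(late_joiner)  # ['c', 'd', 'e', 'a', 'b'] -- a full revolution
```

The visits in the `tail` segment can be shared with the process already scanning, which is where the parallelism comes from.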

PIPELINE TASK VERIFICATION FOR A DATA PROCESSING PLATFORM
20230026126 · 2023-01-26

A pipeline task verification method and system is disclosed, and may use one or more processors. The method may comprise providing a data processing pipeline specification, wherein the data processing pipeline specification defines a plurality of data elements of a data processing pipeline. The method may further comprise identifying from the data processing pipeline specification one or more tasks defining a relationship between a first data element and a second data element. The method may further comprise receiving, for a given task, one or more data processing elements intended to receive the first data element and to produce the second data element. The method may further comprise verifying that the received one or more data processing elements receive the first data element and produce the second data element according to the defined relationship.
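The verification step reduces to checking a processing element's declared input and output against the specification's relationship. A hedged sketch, where the spec format, task names, and element names are all assumptions for illustration:

```python
# The pipeline specification declares each task as a relationship
# (input data element -> output data element); a received processing
# element is verified against that declared relationship.

spec = {
    "clean": ("raw_table", "clean_table"),
    "aggregate": ("clean_table", "summary_table"),
}

def verify(task, element_input, element_output):
    """Check that a processing element honours the spec's relationship."""
    declared_in, declared_out = spec[task]
    return element_input == declared_in and element_output == declared_out

print(verify("clean", "raw_table", "clean_table"))        # True
print(verify("aggregate", "raw_table", "summary_table"))  # False: wrong input
```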

PATCHING FOR CLOUD-BASED 5G NETWORKS
20230359453 · 2023-11-09

An automated patching system for a cloud-based network includes an instance of a network function running on cloud-based hardware. An agent retrieves a baseline and a list of installed software for the instance. A patching function is in communication with the agent and runs at a first scheduled time in response to a first maintenance window starting at the first scheduled time. The first maintenance window comprises a first target list of instances running in a first availability zone. The patching function also runs at a second scheduled time in response to a second maintenance window. The patching function polls the agent to add the instance to the first target list of the first maintenance window. A first task launched by the patching function runs a patching executable that applies a patch to the instance in response to the patch missing from the list of installed software.
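The window-driven flow can be sketched in miniature. This is an illustrative model only; the instance records, zone names, and patch identifiers are assumptions, and the real system works against cloud-hosted network-function instances rather than dictionaries:

```python
# A patching function builds a maintenance window's target list from the
# instances in one availability zone, then applies the patch only where
# it is missing from an instance's installed-software list (as reported
# by the instance's agent).

def run_window(instances, zone, patch):
    target_list = [i for i in instances if i["zone"] == zone]
    patched = []
    for inst in target_list:
        if patch not in inst["installed"]:   # patch missing -> launch task
            inst["installed"].append(patch)
            patched.append(inst["name"])
    return patched

fleet = [
    {"name": "nf-1", "zone": "az1", "installed": ["base-1.0"]},
    {"name": "nf-2", "zone": "az1", "installed": ["base-1.0", "patch-7"]},
    {"name": "nf-3", "zone": "az2", "installed": ["base-1.0"]},
]
patched = run_window(fleet, "az1", "patch-7")
print(patched)  # ['nf-1'] -- nf-2 already has it, nf-3 is outside the window
```

Note the two gates the abstract describes: an instance must be in the window's availability zone, and the patch must be absent from its installed-software list.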

Task Scheduling in a GPU Using Wakeup Event State Data

A method of scheduling tasks within a GPU or other highly parallel processing unit is described which is both age-aware and wakeup event driven. Tasks which are received are added to an age-based task queue. Wakeup event bits for task types, or combinations of task types and data groups, are set in response to completion of a task dependency and these wakeup event bits are used to select an oldest task from the queue that satisfies predefined criteria.
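The selection rule combines the two properties named above: age order and wakeup events. A minimal sketch under assumed names (the real GPU hardware tracks wakeup bits per task type or per task-type/data-group combination; a set of types stands in here):

```python
# Tasks join an age-ordered queue; completing a dependency sets a wakeup
# bit for a task type; the scheduler selects the oldest queued task whose
# type's wakeup bit is set.
from collections import deque

class Scheduler:
    def __init__(self):
        self.queue = deque()       # oldest first (age-based queue)
        self.wakeup = set()        # task types with their wakeup bit set

    def add_task(self, task_id, task_type):
        self.queue.append((task_id, task_type))

    def dependency_complete(self, task_type):
        self.wakeup.add(task_type)

    def pick(self):
        for task in self.queue:    # oldest-first scan
            if task[1] in self.wakeup:
                self.queue.remove(task)
                return task[0]
        return None                # nothing eligible has been woken

s = Scheduler()
s.add_task("t1", "vertex")         # older task, dependency not yet met
s.add_task("t2", "pixel")
s.dependency_complete("pixel")     # sets the wakeup bit for 'pixel'
picked = s.pick()
print(picked)  # 't2' -- oldest task whose wakeup bit is set
```

Age alone would pick `t1`; the wakeup bits keep the scheduler from dispatching a task whose dependency has not completed.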

MEMORY MODULE THREADING WITH STAGGERED DATA TRANSFERS
20220342834 · 2022-10-27

A method of transferring data between a memory controller and at least one memory module via a primary data bus having a primary data bus width is disclosed. The method includes accessing a first memory device group via a corresponding data bus path in response to a threaded memory request from the memory controller. The accessing results in data groups collectively forming a first data thread transferred across a corresponding secondary data bus path. Transfer of the first data thread across the primary data bus width is carried out over a first time interval, while using less than the continuous transfer throughput of the primary data bus during that first time interval. During the first time interval, at least one data group from a second data thread is transferred on the primary data bus.

AUTO-RECOVERY FRAMEWORK

The present disclosure relates to computer-implemented methods, software, and systems for automatic recovery job execution through a scheduling framework in a cloud environment. One or more recovery jobs are scheduled to be performed periodically for one or more registered service components included in a service instance running on a cluster node of a cloud platform. Each recovery job is associated with a corresponding service component of the service instance. A health check operation is invoked at a service component when the scheduling framework executes the recovery job corresponding to that service component. In response to determining that the service component needs a recovery measure based on a result from the health check operation, a recovery operation is invoked as part of executing a set of scheduled routines of the recovery job. Implemented logic for the recovery operation is stored and executed at the service component.
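One scheduled pass of the framework can be sketched as follows. Class and method names are assumptions for the example; the key property from the abstract is that the recovery logic lives in the component itself, while the framework only schedules and invokes:

```python
# Each registered service component gets a recovery job: the job invokes
# the component's health check and, if the component reports unhealthy,
# invokes the recovery logic implemented by the component itself.

class ServiceComponent:
    def __init__(self, name, healthy):
        self.name = name
        self.healthy = healthy

    def health_check(self):
        return self.healthy

    def recover(self):
        # recovery logic is stored and executed at the component
        self.healthy = True

def run_recovery_jobs(components):
    """One scheduled pass over all registered components."""
    recovered = []
    for c in components:
        if not c.health_check():   # needs a recovery measure
            c.recover()
            recovered.append(c.name)
    return recovered

parts = [ServiceComponent("db", True), ServiceComponent("cache", False)]
names = run_recovery_jobs(parts)
print(names)  # ['cache'] -- only the unhealthy component is recovered
```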

CROSS-COMPILATION, ORCHESTRATION, AND SCHEDULING FOR IN-MEMORY DATABASES AS A SERVICE
20220261280 · 2022-08-18

In an example embodiment, a new solution is provided for an in-memory database provided in a cloud as a service that enables “job cross running” instead of “parallel job running.” Specifically, job scripts are clustered based on a shared service. A primary job script in the cluster is compiled and executed, but secondary job scripts in the cluster are not compiled until after the execution of the primary job script has begun. A mock library is inserted into each of the secondary job scripts to cause service calls for the shared service in the secondary job scripts to be replaced with mock service calls. The secondary job scripts are then scheduled and executed, and upon completion the primary job script is permitted to delete the shared service.
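The ordering constraints are the interesting part: the primary runs real service calls first, the secondaries run mocked calls concurrently with it, and the shared service is deleted only at the end. A minimal trace of that ordering, with all job names and call shapes assumed for illustration (real job scripts would be compiled and scheduled, not plain function calls):

```python
# "Job cross running": the primary script uses the shared service for
# real; a mock library replaces the shared-service calls in the secondary
# scripts so they can run without re-creating the service; the primary
# deletes the shared service only after the cluster completes.

calls = []

def real_service_call(job):
    calls.append(("real", job))

def mock_service_call(job):
    calls.append(("mock", job))

def run_cluster(primary, secondaries):
    real_service_call(primary)      # primary compiled and executed first
    for job in secondaries:         # secondaries compiled after primary starts
        mock_service_call(job)      # service calls replaced by mock calls
    calls.append(("delete_shared_service", primary))  # cleanup comes last

run_cluster("job_a", ["job_b", "job_c"])
print(calls)
```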

Adaptive program task scheduling to blocking and non-blocking queues

Techniques are disclosed relating to scheduling program tasks in a server computer system. An example server computer system is configured to maintain first and second sets of task queues that have different performance characteristics, and to collect performance metrics relating to processing of program tasks from the first and second sets of task queues. Based on the collected performance metrics, the server computer system is further configured to update a scheduling algorithm for assigning program tasks to queues in the first and second sets of task queues. In response to receiving a particular program task associated with a user transaction, the server computer system is also configured to select the first set of task queues for the particular program task, and to assign the particular program task to a particular task queue in the first set of task queues.
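A toy version of the feedback loop makes the mechanism concrete. This sketch assumes mean latency as the metric and a pick-the-faster-set rule; the patented scheduling algorithm is not specified here, and all names are illustrative:

```python
# Two queue sets with different performance characteristics; collected
# latency metrics update a simple scheduling rule that routes incoming
# user-transaction tasks to the set observed to be faster.

class QueueSets:
    def __init__(self):
        self.sets = {"blocking": [], "non_blocking": []}
        self.metrics = {"blocking": [], "non_blocking": []}

    def record(self, set_name, latency_ms):
        # collect a performance metric for one processed task
        self.metrics[set_name].append(latency_ms)

    def preferred_set(self):
        # scheduling rule derived from collected metrics:
        # choose the set with the lower mean observed latency
        def mean(xs):
            return sum(xs) / len(xs) if xs else float("inf")
        return min(self.metrics, key=lambda s: mean(self.metrics[s]))

    def assign(self, task):
        chosen = self.preferred_set()
        self.sets[chosen].append(task)
        return chosen

qs = QueueSets()
qs.record("blocking", 40); qs.record("blocking", 60)
qs.record("non_blocking", 10); qs.record("non_blocking", 20)
chosen = qs.assign("user_txn_1")
print(chosen)  # 'non_blocking' -- lower observed latency wins
```

As new metrics arrive, `preferred_set` changes automatically, which is the "update the scheduling algorithm based on collected metrics" loop in miniature.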