Patent classifications
G06F2209/5018
PERFORMANCE ISLANDS FOR CPU CLUSTERS
Embodiments include an asymmetric multiprocessing (AMP) system having two or more central processing unit (CPU) clusters of a first core type and a CPU cluster of a second core type. Some embodiments include determining a control effort for a first active thread group, and assigning the first thread group to a first performance island according to the control effort range of the first performance island. The first performance island can include a first CPU cluster of the first core type, and a second performance island can include a second CPU cluster of the first core type, where the second performance island corresponds to a different control effort range than the first performance island. Some embodiments include assigning the first CPU cluster as a preferred CPU cluster of the first thread group, and transmitting a first signal identifying the first CPU cluster as the preferred CPU cluster assigned to the first thread group.
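The island-assignment step above can be sketched as a lookup from a thread group's control effort into non-overlapping effort ranges, one per island. This is a minimal illustration in Python, not the patent's implementation; the island names, boundaries, and `preferred_cluster` helper are all assumptions.

```python
# Sketch: assigning a thread group to a performance island by control effort.
# Island boundaries and cluster IDs are illustrative, not from the patent.

from dataclasses import dataclass

@dataclass
class PerformanceIsland:
    cluster_id: int           # a CPU cluster of the first core type
    effort_range: tuple       # half-open [low, high) control-effort range

ISLANDS = [
    PerformanceIsland(cluster_id=0, effort_range=(0.0, 0.5)),   # first island
    PerformanceIsland(cluster_id=1, effort_range=(0.5, 1.01)),  # second island, different range
]

def preferred_cluster(control_effort: float) -> int:
    """Return the cluster of the island whose range covers the control effort."""
    for island in ISLANDS:
        low, high = island.effort_range
        if low <= control_effort < high:
            return island.cluster_id
    raise ValueError("control effort out of range")
```

A thread group with low control effort lands on the first island's cluster; a high-effort group lands on the second island's cluster.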
SIMULTANEOUS-MULTI-THREADING (SMT) AWARE PROCESSOR ALLOCATION FOR CLOUD REAL-TIME WORKLOADS
An example system includes a processor and a node agent executing on the processor. The node agent is configured to receive a message indicative of a workload, a processor policy of the workload, and a number of processor threads requested for the workload. The node agent is also configured to allow simultaneous allocation of a processor core to the workload and another workload based on the processor policy being a first policy. The node agent is also configured to prevent simultaneous allocation of the processor core to the workload and the other workload based on the processor policy being a second policy or a third policy. The node agent is also configured to allow simultaneous allocation of the processor core for two or more of the requested processor threads based on the processor policy being the second policy. The node agent is also configured to prevent simultaneous allocation of the processor core for more than one of the requested processor threads based on the processor policy being the third policy.
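The three processor policies above reduce to a per-core co-location rule: the first policy allows any sharing, the second allows sharing only among the same workload's threads, and the third forbids sharing entirely. A minimal sketch, assuming illustrative policy names not taken from the patent:

```python
# Sketch of the three SMT processor policies. The policy names ("shared",
# "core-exclusive", "thread-isolated") are illustrative placeholders.

def can_share_core(policy: str, same_workload: bool) -> bool:
    """May a second thread be placed on an already-occupied physical core?"""
    if policy == "shared":
        return True                # first policy: any co-location allowed
    if policy == "core-exclusive":
        return same_workload       # second policy: only this workload's own threads
    if policy == "thread-isolated":
        return False               # third policy: one thread per physical core
    raise ValueError(f"unknown policy: {policy}")
```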
Efficient worker utilization
Techniques are disclosed for efficient utilization of worker threads in a workflow-as-a-service (WFaaS) environment. A client device may request a workflow for execution by the client device. The client device may receive the requested workflow and initialize a set of worker threads to execute the workflow and a set of heartbeater threads to monitor the set of worker threads. Upon receiving an indication of a processing delay, the client device may capture the state of the workflow, suspend execution of the workflow, and store the workflow in a temporary queue. While the processing delay persists, the client device may use the set of worker threads to execute other tasks. When the processing delay terminates, the client device may resume execution of the workflow.
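The capture-suspend-resume cycle above can be sketched as snapshotting workflow state into a temporary queue and restoring it later. The `Workflow` class, its `step` marker, and the handler names below are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch of suspending a delayed workflow and resuming it later,
# assuming a workflow object that can snapshot its own state.

import queue

class Workflow:
    def __init__(self, task_id):
        self.task_id = task_id
        self.step = 0                 # progress marker captured on suspension

    def capture_state(self):
        return {"task_id": self.task_id, "step": self.step}

suspended = queue.Queue()             # temporary queue for delayed workflows

def on_processing_delay(workflow):
    """Capture state and park the workflow so workers can run other tasks."""
    state = workflow.capture_state()
    suspended.put((workflow, state))

def on_delay_cleared():
    """Resume the oldest suspended workflow from its captured state."""
    workflow, state = suspended.get()
    workflow.step = state["step"]
    return workflow
```

While workflows sit in `suspended`, the same worker threads are free to pick up unrelated tasks.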
INTELLIGENT PROCESSING OF TASKS IN AN ELASTIC CLOUD APPLICATION
Computer-readable media, methods, and systems are disclosed for optimizing processing across a plurality of processing resources using one or more leader threads to assign processing tasks to available processing threads across a plurality of application instances. The one or more leader threads monitor the status and availability of the processing threads and a processing load across the plurality of application instances to efficiently assign processing tasks and distribute the processing load across the processing threads of the application instances.
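The leader thread's assignment step can be sketched as routing each task to the application instance with the lowest current load. The greedy rule and names below are illustrative, not the disclosed method.

```python
# Sketch: a leader thread distributing tasks across application instances
# by current load. The greedy least-loaded rule is illustrative.

def assign_tasks(tasks, instance_loads):
    """Distribute tasks across instances, always picking the least loaded.

    instance_loads: dict mapping instance name -> number of in-flight tasks.
    Returns a dict mapping instance name -> list of assigned tasks.
    """
    assignments = {name: [] for name in instance_loads}
    for task in tasks:
        target = min(instance_loads, key=instance_loads.get)  # least loaded
        assignments[target].append(task)
        instance_loads[target] += 1                           # track new load
    return assignments
```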
KERNEL OPTIMIZATION AND DELAYED EXECUTION
A kernel comprising at least one dynamically configurable parameter is submitted by a processor. The kernel is to be executed at a later time. Data is received after the kernel has been submitted. The at least one dynamically configurable parameter of the kernel is updated based on the data. The kernel having the at least one updated dynamically configurable parameter is executed after the at least one dynamically configurable parameter has been updated.
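The submit-update-execute sequence above can be sketched with a deferred callable whose parameters remain mutable between submission and execution. The `DeferredKernel` class is an illustrative stand-in; the patent's kernel is a device-side construct, not a Python callable.

```python
# Sketch of submit-then-update-then-execute for a kernel with a dynamically
# configurable parameter. The "kernel" here is a plain callable; illustrative.

class DeferredKernel:
    def __init__(self, fn, **params):
        self.fn = fn
        self.params = params          # dynamically configurable parameters

    def update(self, **new_params):
        """Update parameters after submission but before execution."""
        self.params.update(new_params)

    def execute(self):
        return self.fn(**self.params)

# Submit now, tune when data arrives, execute last.
kernel = DeferredKernel(lambda scale, data: [scale * x for x in data],
                        scale=1, data=[1, 2, 3])
kernel.update(scale=10)               # data received after submission
result = kernel.execute()
```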
METHOD AND SYSTEM FOR MULTI-CORE LOAD SCHEDULING IN AN OPERATING SYSTEM (OS) LESS COMMUNICATION NETWORK
A method and system for multi-core load scheduling in an operating system (OS) less communication network is disclosed. The method comprises initializing a plurality of threads for processing corresponding functionalities of incoming packets. The method further comprises synchronizing the plurality of initialized threads with each other for simultaneous processing of the one or more incoming packets. The method further comprises determining central processing unit (CPU) load on each of the plurality of cores and an ingress data-rate of one or more incoming data packets. The method further comprises enabling or disabling at least one flag based on the determined load and the ingress data-rate, and determining, by a reinforcement learning (RL) agent, at least one thread based on the enabled flag. The method further comprises processing the corresponding functionalities associated with the one or more incoming packets based on the at least one determined thread.
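The flag-and-selection step can be sketched as follows. The thresholds and flag names are illustrative, and the patent's RL agent is replaced here by a fixed rule purely for demonstration.

```python
# Sketch: enabling flags from CPU load and ingress data-rate, then choosing
# a thread based on the enabled flag. Thresholds are illustrative, and a
# fixed rule stands in for the patent's RL agent.

def set_flags(cpu_load, ingress_mbps, load_hi=0.8, rate_hi=800):
    """Enable the scale-up flag under pressure, scale-down when idle."""
    return {
        "scale_up": cpu_load > load_hi or ingress_mbps > rate_hi,
        "scale_down": cpu_load < 0.2 and ingress_mbps < 100,
    }

def pick_thread(flags, active_threads, spare_threads):
    """Stand-in for the RL agent: pick a thread based on the enabled flag."""
    if flags["scale_up"] and spare_threads:
        return spare_threads[0]       # bring an extra thread into service
    return active_threads[0]          # keep processing on the current thread
```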
Blockchain transaction processing systems and methods
Disclosed are computer-implemented methods, non-transitory computer-readable media, and systems for processing blockchain transactions. One computer-implemented method includes receiving M blockchain transactions and executing N blockchain transactions out of the M blockchain transactions in parallel using K threads of a first thread pool. A second thread pool is dedicated for accessing blockchain data stored in a storage system. For blockchain transactions distributed to each one of the K threads, one or more coroutines are used for each blockchain transaction so that the blockchain transactions are executed asynchronously using the coroutines. A blockchain block is generated to include the M blockchain transactions and added to a blockchain stored in the storage system.
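The execution model above, K pool threads each running its transactions as coroutines, can be sketched with `asyncio` inside a `ThreadPoolExecutor`. This is a simplified illustration: `execute_tx`, the batch shapes, and the sleep standing in for a storage read are all assumptions.

```python
# Sketch: K threads from a pool, each executing its distributed transactions
# as coroutines so storage accesses can overlap. Names are illustrative.

import asyncio
from concurrent.futures import ThreadPoolExecutor

async def execute_tx(tx):
    await asyncio.sleep(0)            # stands in for an async storage read
    return f"executed:{tx}"

def run_batch(txs):
    """Each pool thread runs its transactions concurrently as coroutines."""
    async def gather():
        return await asyncio.gather(*(execute_tx(tx) for tx in txs))
    return asyncio.run(gather())      # one event loop per worker thread

K = 2
with ThreadPoolExecutor(max_workers=K) as pool:   # the first thread pool
    batches = [["tx1", "tx2"], ["tx3"]]
    results = list(pool.map(run_batch, batches))
```

In the disclosed design a separate, second thread pool handles storage access; here the `asyncio.sleep` placeholder elides that detail.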
Method, device and computer program product for data backup
Embodiments of the present disclosure relate to a method for data backup. The method includes obtaining an attribute value associated with a backup task to be run, the backup task being used for backing up data on a client terminal to a server through a network, the attribute value including a value of at least one of an attribute of the client terminal, an attribute of the server, and an attribute of the network; determining, based on the attribute value, the number of threads to be used to perform the backup task on the client terminal; and causing the client terminal to perform the backup task using the number of threads to back up the data.
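The thread-count determination can be sketched as taking the tightest of the client, server, and network constraints. The specific attributes and weighting below are illustrative assumptions, not the disclosed formula.

```python
# Sketch: choosing a backup-thread count from client, server, and network
# attribute values. The constraint model is illustrative.

def backup_thread_count(client_cpus, server_slots, network_mbps,
                        per_thread_mbps=100, max_threads=16):
    """Take the tightest of the three constraints, clamped to a sane range."""
    by_network = max(1, network_mbps // per_thread_mbps)  # bandwidth-limited
    return max(1, min(client_cpus, server_slots, by_network, max_threads))
```

For example, a client with 8 CPUs backing up to a server with 4 free slots over a 1 Gb/s link would run 4 threads; over a 50 Mb/s link it would run 1.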
Systems and methods for detecting and filtering function calls within processes for malware behavior
Systems and methods for monitoring a process are provided. An example method commences with providing a management platform. The management platform is configured to receive user rules for processing at least one function call within the process. A high-level script can be used based on the user rules to develop and install at least one library to execute synchronously within the process. The at least one library can be configured to monitor the process for at least one function call and capture argument values of the function call before the argument values are passed to a function. The at least one library can filter the function call based at least in part on the argument values. The method can continue with selectively creating an API event for execution by a dedicated worker thread. The execution of the API event is performed asynchronously with regard to the process.
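The capture-filter-event flow can be sketched with a decorator that inspects arguments before the call proceeds and, when the filter matches, queues an event for asynchronous handling. The decorator, filter, and `open_file` target are illustrative; real implementations hook functions at the library or syscall level.

```python
# Sketch: capturing a function call's arguments before the call proceeds,
# filtering on argument values, and queuing an API event for a worker thread.

import queue

api_events = queue.Queue()            # drained by a dedicated worker thread

def monitored(filter_fn):
    """Wrap a function so its arguments are inspected before invocation."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if filter_fn(args, kwargs):                # filter on argument values
                api_events.put((fn.__name__, args))    # event handled off-thread
            return fn(*args, **kwargs)                 # call proceeds regardless
        return inner
    return wrap

@monitored(filter_fn=lambda args, kwargs: "/etc" in args[0])
def open_file(path):
    return f"opened {path}"
```

Calls whose arguments match the rule generate events; all calls still execute synchronously within the process, matching the split described above.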
Throttling and limiting thread resources of service computing platform
Systems and techniques are provided for monitoring and managing the performance of services accessed by sites on a computing platform. When a performance issue is identified, a service is monitored to determine if calls to the service exceed a threshold completion time. If so, a resource available to call the service is adaptively throttled by the platform.
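The adaptive-throttling loop above can be sketched as shrinking a per-service thread budget when observed call durations exceed the threshold and growing it back otherwise. The halving/increment constants are illustrative assumptions.

```python
# Sketch of adaptive throttling: shrink the thread budget for a service when
# its calls exceed a completion-time threshold, recover when they do not.

class ServiceThrottle:
    def __init__(self, max_threads=10, threshold_ms=500):
        self.limit = max_threads          # threads currently allowed to call
        self.max_threads = max_threads
        self.threshold_ms = threshold_ms

    def record_call(self, duration_ms):
        """Adapt the per-service thread limit after each observed call."""
        if duration_ms > self.threshold_ms:
            self.limit = max(1, self.limit // 2)   # throttle aggressively
        elif self.limit < self.max_threads:
            self.limit += 1                        # recover gradually
```

The asymmetry (halve on a slow call, add one on a fast call) backs off quickly when a service degrades and probes cautiously as it recovers, a common pattern in adaptive rate control.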