G06F11/3017

Methods for scheduling multiple batches of concurrent jobs
11327788 · 2022-05-10

Exemplary embodiments include a method for scheduling multiple batches of concurrent jobs. The method includes: scheduling a plurality of batches, each batch having a plurality of jobs; identifying one or more dependencies via a configuration file, wherein the configuration file manages dependencies for each of the jobs of each batch; monitoring the jobs; identifying and reporting one or more errors; and resolving the one or more errors by modifying one or more of hardware performance, CPU usage, memory consumption, database performance, and other metrics to optimize system resource usage.
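The dependency-driven scheduling step could be sketched roughly as follows. This is a minimal illustration of the idea, not the patented implementation; the configuration contents and all names are hypothetical:

```python
# Hypothetical dependency configuration: each job maps to the jobs
# it depends on, as the configuration file in the abstract would.
CONFIG = {
    "load": [],
    "transform": ["load"],
    "audit": ["load"],
    "report": ["transform", "audit"],
}

def schedule(config):
    """Topologically order jobs so every dependency runs first."""
    ordered, done = [], set()

    def visit(job):
        if job in done:
            return
        for dep in config[job]:     # schedule dependencies first
            visit(dep)
        done.add(job)
        ordered.append(job)

    for job in config:
        visit(job)
    return ordered

order = schedule(CONFIG)
# Every job appears after all of its configured dependencies.
```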

Exception analysis for data storage devices

A Data Storage Device (DSD) includes a memory for storing data, and a controller configured to execute firmware or code to perform a task. While performing the task, the controller is further configured to assign unique identifiers to respective firmware or code portions that are executed to perform the task, and create a list or data structure including the unique identifier assigned to the firmware or code portion that created the task. A unique identifier is added to the list or data structure for each firmware or code portion executed for the task. The list or data structure indicates the order in which the firmware or code portions are executed.
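The identifier-and-list mechanism can be pictured with a short sketch. All portion names here are invented for illustration; the abstract does not specify how identifiers are assigned:

```python
import itertools

_ids = itertools.count(1)
_registry = {}                      # code portion name -> unique identifier

def portion_id(name):
    """Assign (once) and return the unique identifier for a code portion."""
    if name not in _registry:
        _registry[name] = next(_ids)
    return _registry[name]

class TaskTrace:
    """Ordered list of identifiers of code portions executed for a task."""
    def __init__(self, creator):
        # First entry: the portion that created the task.
        self.order = [portion_id(creator)]

    def record(self, name):
        # Append an identifier for each portion executed for the task.
        self.order.append(portion_id(name))

trace = TaskTrace("host_read_handler")   # hypothetical firmware portions
trace.record("lookup_mapping")
trace.record("nand_read")
trace.record("ecc_decode")
# trace.order now reflects the execution order of the portions.
```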

Performance monitoring for storage system with core thread comprising internal and external schedulers

A processing device monitors performance of a first thread of a first application executing on one of a plurality of processing cores of a storage system. The first thread comprises an internal scheduler controlling switching between a plurality of sub-threads of the first thread, and an external scheduler controlling release of the processing core by the first thread for use by at least a second thread of a second application different than the first application. In conjunction with monitoring the performance of the first thread in executing the first application, the processing device maintains a cumulative suspend time of the first thread over multiple suspensions of the first thread, with one or more of the multiple suspensions allowing at least the second thread of the second application to execute on the processing core, and generates performance measurements for sub-threads of the first thread using the cumulative suspend time.
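The role of the cumulative suspend time can be sketched as below: wall-clock elapsed time for a sub-thread, minus the suspend time accrued in that interval, approximates the time the thread actually held the core. Timestamps are simulated values and all names are illustrative:

```python
class ThreadPerfMonitor:
    """Tracks cumulative suspend time of a core thread so sub-thread
    measurements can exclude time lent to other applications."""

    def __init__(self):
        self.cumulative_suspend = 0.0
        self._suspended_at = None

    def suspend(self, now):
        """External scheduler releases the core for another app's thread."""
        self._suspended_at = now

    def resume(self, now):
        """Thread regains the core; accumulate the suspended interval."""
        self.cumulative_suspend += now - self._suspended_at
        self._suspended_at = None

    @staticmethod
    def cpu_time(start_wall, end_wall, start_susp, end_susp):
        """Wall-clock interval minus suspend time accrued within it."""
        return (end_wall - start_wall) - (end_susp - start_susp)

m = ThreadPerfMonitor()
t0, s0 = 0.0, m.cumulative_suspend   # sub-thread starts
m.suspend(2.0)
m.resume(5.0)                        # core lent out for 3 time units
t1, s1 = 8.0, m.cumulative_suspend   # sub-thread ends
busy = ThreadPerfMonitor.cpu_time(t0, t1, s0, s1)   # 8.0 - 3.0
```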

Leveraging thermal profiles of processing tasks to dynamically schedule execution of the processing tasks

Embodiments relate to a system, program product, and method for leveraging thermal profiles of processing tasks to dynamically schedule execution of the processing tasks. Thermal profiles of the processing tasks are generated, where the thermal profiles include core hardware and core processing measurements and predictions of thermal performance based on the measurements. The execution of the processing tasks is scheduled in processing devices to mitigate the potential for reducing a margin to a hardware thermal limit.
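A minimal sketch of thermal-margin-aware dispatch, assuming a per-task predicted temperature rise as the "thermal profile" (the limit, temperatures, and task names are invented for illustration):

```python
THERMAL_LIMIT_C = 95.0                       # hypothetical hardware limit

cores = {0: 70.0, 1: 88.0, 2: 61.0}          # measured core temperatures (C)

# Thermal profile: predicted temperature rise per task, from prior runs.
profiles = {"encode": 12.0, "index": 4.0}

def schedule_task(task, cores, profiles):
    """Choose the core that maximizes the post-execution thermal margin."""
    def margin(core_id):
        return THERMAL_LIMIT_C - (cores[core_id] + profiles[task])

    best = max(cores, key=margin)
    if margin(best) <= 0:
        raise RuntimeError("no core can run task within the thermal limit")
    cores[best] += profiles[task]            # account for predicted heating
    return best

core = schedule_task("encode", cores, profiles)   # picks the coolest core
```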

Server, electronic device, and control method thereof

An electronic apparatus is provided. The electronic apparatus includes a memory storing a program including a plurality of program codes and performing a predetermined function; a communicator configured to receive policy information including target information and determination information from an external server; and a processor configured to perform the predetermined function using the program and, when a program code corresponding to the target information is executed while performing the predetermined function, compare an execution result of the program code with the determination information and determine whether the program code normally operates.
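The policy check described could look roughly like the following: when the program code named in the policy's target information executes, its result is compared against the determination information. The policy fields and function names are hypothetical:

```python
# Hypothetical policy received from the external server.
policy = {
    "target": "checksum_block",               # program code to watch
    "determination": {"min": 0, "max": 255},  # expected result range
}

def checksum_block(data):
    return sum(data) % 256

def run_with_policy(func, args, policy):
    """Run the function; if it matches the policy's target information,
    compare its result with the determination information."""
    result = func(*args)
    if func.__name__ == policy["target"]:
        d = policy["determination"]
        normal = d["min"] <= result <= d["max"]
        return result, normal                 # whether it operates normally
    return result, None

value, ok = run_with_policy(checksum_block, ([1, 2, 3],), policy)
```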

Methods, systems, articles of manufacture, and apparatus to optimize thread scheduling

An apparatus comprising: a model to generate adjusted tuning parameters of a thread scheduling policy based on a tradeoff indication value of a target system; and a workload monitor to: execute a workload based on the thread scheduling policy; obtain a performance score and a power score from the target system based on execution of the workload, the performance score and the power score corresponding to a tradeoff indication value; compare the tradeoff indication value to a criterion; and based on the comparison, initiate the model to re-adjust the adjusted tuning parameters.
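The feedback loop in this abstract can be sketched as a simple tune-run-score cycle. The response surface, parameter, and criterion below are fictitious stand-ins for the model and scores the abstract describes:

```python
def run_workload(params):
    """Stand-in for executing a workload; returns (perf, power) scores."""
    quantum = params["time_quantum_ms"]
    perf = 100.0 - abs(quantum - 8) * 5      # fictitious response surface
    power = 20.0 + quantum
    return perf, power

def tradeoff(perf, power):
    """Tradeoff indication value: performance per unit power."""
    return perf / power

def tune(params, criterion, steps=20):
    """Re-adjust the tuning parameter until the tradeoff meets the criterion."""
    for _ in range(steps):
        perf, power = run_workload(params)
        if tradeoff(perf, power) >= criterion:
            return params                    # criterion satisfied
        params["time_quantum_ms"] += 1       # model "re-adjusts" the parameter
    return params

tuned = tune({"time_quantum_ms": 2}, criterion=3.4)
```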

Indexing and replaying time-travel traces using diffgrams
11163665 · 2021-11-02

Utilizing diffgrams for trace indexing and replay. A subset of instructions of a trace, beginning with a first instruction and ending with a second instruction, is replayed to obtain a first state and a second state of one or more named resources. Based on replaying the subset of instructions, a diffgram is generated, which is structured such that addition of the diffgram at the first instruction brings the one or more named resources to the second state, and subtraction of the diffgram at the second instruction brings the one or more named resources to the first state. As part of reaching a target instruction, the diffgram is later added at the first instruction to restore the second state at the second instruction, or subtracted at the second instruction to restore the first state at the first instruction.
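An invertible diffgram over named resources can be sketched as a map of (old, new) pairs, so it can be added to the first state or subtracted from the second. States here are plain dictionaries with invented register names:

```python
def make_diffgram(first, second):
    """Record (old, new) for every named resource that changed."""
    keys = first.keys() | second.keys()
    return {k: (first.get(k), second.get(k))
            for k in keys if first.get(k) != second.get(k)}

def add(state, diff):
    """first state + diffgram -> second state."""
    out = dict(state)
    for k, (_, new) in diff.items():
        if new is None:
            out.pop(k, None)
        else:
            out[k] = new
    return out

def subtract(state, diff):
    """second state - diffgram -> first state."""
    out = dict(state)
    for k, (old, _) in diff.items():
        if old is None:
            out.pop(k, None)
        else:
            out[k] = old
    return out

first = {"rax": 1, "rbx": 2}               # state at the first instruction
second = {"rax": 1, "rbx": 7, "rcx": 9}    # state at the second instruction
d = make_diffgram(first, second)
```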

Multiple operation modes in backup operator

Some embodiments include a system, method, and non-transitory medium, the system including a plurality of database services and a stateless backup operator. The backup operator performs a first mode for each database service, reconfiguring the current backup state of each database service with the associated desired backup state information for the respective service, and switches to and performs a second mode for a specific one of the plurality of database services in response to a request to execute a first backup operation for that service.
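The two modes resemble a reconcile loop plus an on-demand path; a minimal sketch, with invented service names and state shapes:

```python
# Hypothetical services with current vs. desired backup state.
services = {
    "orders":  {"current": {"schedule": None},
                "desired": {"schedule": "daily"}},
    "billing": {"current": {"schedule": "daily"},
                "desired": {"schedule": "daily"}},
}

backups_run = []

def reconcile_all(services):
    """First mode: align each service's current backup state
    with its desired backup state information."""
    for svc in services.values():
        if svc["current"] != svc["desired"]:
            svc["current"] = dict(svc["desired"])   # reconfigure

def backup_one(name):
    """Second mode: run a backup for one specific service on request."""
    backups_run.append(name)

reconcile_all(services)       # operator runs in the first mode
backup_one("orders")          # then switches to the second mode on request
```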

Implementing a policy-based agent with plugin infrastructure
20230135013 · 2023-05-04

The present disclosure relates to implementing, updating, and managing operation of a client agent on a client device (e.g., computing device, virtual device) in a way that enables isolation of features and functionality while also allowing the client agent to self-heal and intelligently update discrete features thereon. The client agent includes a collection of plugins that are isolated and run in accordance with respective plugin policies. The client agent makes use of device-level, agent-level, and plugin-level health monitors that collectively monitor a health status of discrete components of the client agent in a way that enables the client agent to selectively discontinue scheduling certain plugins without interrupting functionality of other plugins or of the client agent as a whole. Indeed, features described herein enable the client agent to intelligently update and self-heal with respect to individual plugins based on information obtained by the respective health monitors.
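The selective-scheduling behavior can be sketched briefly: a plugin-level health flag lets the agent stop scheduling one plugin without touching the others. Plugin names and the health-flag mechanism are illustrative only:

```python
class Plugin:
    """Isolated plugin run by the agent under its own policy."""
    def __init__(self, name):
        self.name = name
        self.healthy = True     # set by a plugin-level health monitor
        self.runs = 0

    def run(self):
        self.runs += 1

class Agent:
    def __init__(self, plugins):
        self.plugins = plugins

    def schedule_tick(self):
        for p in self.plugins:
            if p.healthy:       # skip plugins flagged unhealthy;
                p.run()         # others keep running uninterrupted

agent = Agent([Plugin("av_scan"), Plugin("telemetry")])
agent.schedule_tick()
agent.plugins[1].healthy = False    # health monitor flags "telemetry"
agent.schedule_tick()               # "av_scan" still runs normally
```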

Systems and methods for multi-event correlation
11809920 · 2023-11-07

Provided herein are systems and methods for multi-event correlation. Upon receiving a stream of events, each leaf rule engine may detect a plurality of events from the stream that match a characteristic for the leaf rule engine. Each leaf rule engine may identify, from the plurality of events and within a time window, a group of events that satisfies a condition for the respective leaf rule engine. A root conditions engine may receive a stream of leaf events corresponding to the group of events identified by each leaf rule engine. The root conditions engine may identify, from the received stream of leaf events and within a root time window, a collection of events that satisfies a condition for the root conditions engine. A trigger may execute an action according to the collection of events identified within the root time window.
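The two-level correlation could be sketched like this: leaf engines group matching events within a window, and a root engine correlates the resulting leaf events within a root window. Event types, counts, and the trigger string are all hypothetical:

```python
events = [
    {"t": 1, "type": "login_fail", "host": "a"},
    {"t": 2, "type": "login_fail", "host": "a"},
    {"t": 3, "type": "port_scan",  "host": "a"},
]

def leaf_engine(stream, etype, count, window):
    """Emit a leaf event when `count` matching events fall within `window`."""
    matched = [e for e in stream if e["type"] == etype]
    for i in range(len(matched) - count + 1):
        group = matched[i:i + count]
        if group[-1]["t"] - group[0]["t"] <= window:
            yield {"t": group[-1]["t"], "leaf": etype, "group": group}

def root_engine(leaf_events, required, window):
    """Fire when every required leaf type occurs within the root window."""
    seen = {}
    for le in sorted(leaf_events, key=lambda e: e["t"]):
        seen[le["leaf"]] = le["t"]
        times = [seen.get(r) for r in required]
        if all(t is not None for t in times) and max(times) - min(times) <= window:
            return "trigger: execute action"
    return None

leaves = list(leaf_engine(events, "login_fail", 2, window=5)) \
       + list(leaf_engine(events, "port_scan", 1, window=5))
result = root_engine(leaves, ["login_fail", "port_scan"], window=10)
```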