Patent classifications
G06F11/3017
Methods and systems for managing performance and power utilization of a processor employing a fully-multithreaded load threshold
A method for managing performance and power utilization of a processor in an information handling system (IHS) employing a balanced fully-multithreaded load threshold is disclosed. The method includes determining the maximum current thread utilization (Umax) and the minimum current thread utilization (Umin) among all current thread utilizations of the processor, and determining the current performance state (P-state) of the processor. The method also includes increasing the current P-state of the processor to the next P-state, toward the maximum P-state (Pmax), when the current utilization of the processor lies between Umin and Umax and is less than a first threshold utilization rate. The method further includes engaging the processor in a turbo mode when the current P-state reaches Pmax and the current utilization of the processor exceeds the first threshold utilization rate.
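One plausible reading of this stepping policy can be sketched as a single evaluation cycle; the function and parameter names below are illustrative assumptions, not taken from the patent.

```python
def next_p_state(current_p, p_max, utilization, u_min, u_max, threshold):
    """Return (new_p_state, turbo) for one evaluation cycle.

    Steps the P-state toward p_max while utilization stays between
    u_min and u_max and below the first threshold; engages turbo mode
    once p_max is reached and utilization exceeds the threshold.
    """
    if current_p == p_max and utilization > threshold:
        return current_p, True                    # engage turbo mode
    if u_min <= utilization <= u_max and utilization < threshold:
        return min(current_p + 1, p_max), False   # step toward Pmax
    return current_p, False                       # hold the current P-state
```

A governor would call this once per sampling interval and apply the returned P-state before the next measurement.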
Process prioritization for information handling systems
An information handling system may determine that a first process of a list of processes is a top-ranked process and may adjust one or more settings of the information handling system associated with the first process. The information handling system may monitor performance parameters of the information handling system following the adjustment of the settings. Based on monitoring the performance parameters, the information handling system may determine that a performance score of the information handling system is below a threshold performance score and may reduce a ranking of the first process based on the determination. The ranking of the first process may be reduced such that a second process becomes a new top-ranked process. The information handling system may then adjust one or more settings associated with the second process.
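The rank-adjust-monitor-demote loop described above can be sketched as follows; the callback names and rank representation are assumptions for illustration.

```python
def tune(processes, apply_settings, measure_score, threshold):
    """processes: list of (name, rank) pairs, higher rank = higher priority.

    Adjusts settings for the top-ranked process; if the measured
    performance score falls below the threshold, demotes that process so
    the runner-up becomes the new top-ranked process and is tuned next.
    """
    ranked = sorted(processes, key=lambda p: p[1], reverse=True)
    top = ranked[0]
    apply_settings(top[0])
    if measure_score() < threshold:
        # Demote the first process below the runner-up.
        new_rank = ranked[1][1] - 1 if len(ranked) > 1 else top[1]
        ranked[0] = (top[0], new_rank)
        ranked.sort(key=lambda p: p[1], reverse=True)
        apply_settings(ranked[0][0])
    return ranked
```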
System and method for enhancing the efficiency of mainframe operations
A method includes monitoring a job being executed at a source mainframe, where a job comprises multiple tasks. The method includes monitoring a particular task of the multiple tasks and determining an application required to execute that task. In response to determining that the particular task requires an application to execute, the method determines a target mainframe where the application is installed. The method further includes validating the environment of the target mainframe to confirm that the particular task can be executed there and, upon validating the target mainframe, redirecting the particular task to the target mainframe for execution. The method also includes monitoring the particular task being executed at the target mainframe and returning the results of the particular task from the target mainframe to the source mainframe.
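The find-validate-redirect flow can be sketched as below; the registry layout and the `validate`/`execute` callbacks are hypothetical stand-ins for the mainframe environment checks and job submission.

```python
def redirect_task(task, required_app, mainframes, validate, execute):
    """Find a target mainframe with the required application installed,
    validate its environment, run the task there, and return the result
    to the caller (the source mainframe in the description above)."""
    for mf in mainframes:
        if required_app in mf["apps"] and validate(mf):
            return execute(mf["name"], task)
    raise RuntimeError("no validated target mainframe found")
```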
WORKGROUP SYNCHRONIZATION AND PROCESSING
A processing system monitors and synchronizes parallel execution of workgroups (WGs). One or more of the WGs perform (e.g., periodically or in response to a trigger such as an indication of oversubscription) a waiting atomic instruction. In response to a comparison between an atomic value produced as a result of the waiting atomic instruction and an expected value, WGs that fail to produce a correct atomic value are identified as being in a waiting state (e.g., waiting for a synchronization variable). Execution of WGs in the waiting state is prevented (e.g., by a context switch) until corresponding synchronization variables are released.
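The comparison step, classifying workgroups by whether their waiting-atomic result matches the expected value, can be sketched on the CPU side; a real implementation would use GPU atomics and a hardware scheduler, so this is purely illustrative.

```python
def classify_workgroups(atomic_results, expected):
    """atomic_results: {workgroup_id: atomic value it produced}.

    Returns (runnable, waiting) workgroup-id lists: workgroups whose
    atomic value differs from the expected value are treated as waiting
    on a synchronization variable and are candidates for context switch.
    """
    runnable = [wg for wg, v in atomic_results.items() if v == expected]
    waiting = [wg for wg, v in atomic_results.items() if v != expected]
    return runnable, waiting
```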
Programmable framework for distributed computation of statistical functions over time-based data
Systems and methods are disclosed to implement a distributed query execution system that performs statistical operations on specified time windows over time-based datasets. In embodiments, the query system splits a statistical function into a set of parallel accumulator tasks that correspond to different portions of the dataset and/or function time windows. The accumulator tasks are executed in parallel by individual accumulator nodes to generate individual statistical result structures. The structures are then combined by an aggregator node to produce an aggregate result structure that indicates the results of the statistical function over the time windows. In embodiments, the accumulator and aggregator tasks are implemented and executed using a programmable task execution framework that allows developers to define custom accumulator and aggregator tasks. Advantageously, the query system allows queries with time-windowed statistical functions to be parallelized across a group of worker nodes and scaled to very large datasets.
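The accumulator/aggregator split can be illustrated with a time-windowed mean, assuming records are `(timestamp, value)` pairs; in the real system each accumulator task would run on a separate worker node, and the function here is a simplified stand-in for a developer-defined task.

```python
from collections import defaultdict

def accumulate(records, window):
    """Accumulator task: partial (sum, count) per time window."""
    partial = defaultdict(lambda: [0.0, 0])
    for ts, value in records:
        bucket = ts // window
        partial[bucket][0] += value
        partial[bucket][1] += 1
    return dict(partial)

def aggregate(partials):
    """Aggregator task: merge partial result structures from all
    accumulators into a per-window mean."""
    merged = defaultdict(lambda: [0.0, 0])
    for partial in partials:
        for bucket, (s, c) in partial.items():
            merged[bucket][0] += s
            merged[bucket][1] += c
    return {bucket: s / c for bucket, (s, c) in merged.items()}
```

Because `(sum, count)` pairs merge associatively, the accumulator tasks can run in parallel over arbitrary shards of the dataset and still yield the exact windowed mean.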
DEBUG TRACE STREAMS FOR CORE SYNCHRONIZATION
The present disclosure provides for synchronization of multi-core systems by monitoring a plurality of debug trace data streams for a redundantly operating system that includes a corresponding plurality of cores performing a task in parallel. In response to detecting a state difference on one debug trace data stream relative to the other debug trace data streams, the core associated with that stream is marked as an affected core and the affected core is restarted.
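The divergence check can be sketched as a step-by-step majority vote over the trace streams, assuming each stream is a sequence of comparable state snapshots; the representation is an assumption made for illustration.

```python
from collections import Counter

def find_affected_cores(trace_streams):
    """Compare lockstep trace streams step by step; return the indices
    of cores whose state differs from the majority at any step."""
    affected = set()
    for step_states in zip(*trace_streams):
        majority, _ = Counter(step_states).most_common(1)[0]
        for core, state in enumerate(step_states):
            if state != majority:
                affected.add(core)
    return sorted(affected)
```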
SERVER NETWORK RESOURCE REALLOCATION
A system and method for determining and generating a visualization of processor utilization is described. The system accesses a source data set that indicates processor utilization rates of a plurality of servers over a plurality of sampling periods. The system defines a target data set that includes a plurality of processor utilization range buckets corresponding to the plurality of sampling periods. The system updates the target data set based on the source data set. A graphical user interface (GUI) is generated based on the updated target data set and includes a stacked area chart indicating percentages of samples corresponding to the processor utilization range buckets over time. The system distributes, based on the updated target data set, a load from a first server to a second server based on the processor utilization range bucket of the first server and the processor utilization range bucket of the second server.
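The bucketing step behind the stacked area chart can be sketched as below, assuming utilization is a 0-100 percent value and the range buckets have fixed edges; both assumptions are illustrative.

```python
def bucket_percentages(samples, edges=(0, 25, 50, 75, 100)):
    """Count utilization samples per range bucket, then convert the
    counts to the percentage-of-samples figures one sampling period
    would contribute to the stacked area chart."""
    counts = [0] * (len(edges) - 1)
    for u in samples:
        for i in range(len(edges) - 1):
            # The last bucket is closed on the right so 100% is counted.
            if edges[i] <= u < edges[i + 1] or (u == edges[-1] and i == len(edges) - 2):
                counts[i] += 1
                break
    total = sum(counts) or 1
    return [100.0 * c / total for c in counts]
```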
THREAD MAPPING
There is provided a method for thread allocation in a multi-processor computing system. The method includes determining whether a thread for execution has a security requirement. The thread is allocated to one of a first processing unit or a second processing unit based on the determination. The thread is allocated for execution by the first processing unit based on the thread having the security requirement.
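The allocation decision reduces to a single routing rule; the thread representation and unit names below are illustrative stand-ins for the two processing units.

```python
def allocate_thread(thread, secure_unit, general_unit):
    """Route a thread to the secure processing unit when it carries a
    security requirement, otherwise to the general-purpose unit."""
    if thread.get("security_requirement"):
        return secure_unit
    return general_unit
```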
MULTI-THREADS TRACKING METHOD, MULTI-THREADS TRACKING SYSTEM FOR OPERATING SYSTEM AND ELECTRONIC DEVICE USING THE SAME
A multi-threads tracking method, a multi-threads tracking system for an operating system, and an electronic device using the same are provided. The multi-threads tracking method includes the following steps. At least two message queue access events between two threads and one message queue are intercepted. A thread identification, a process identification, an input value, and a return value of each of the message queue access events are recorded. Based on a determination of the relationship among the thread identifications, the process identifications, the input values, and the return values of the message queue access events, an in-process dependency among the threads and the message queue is established.
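The dependency step can be sketched by matching a send event's input value to a receive event's return value, linking the two threads through the queue; the event field layout is an assumption mirroring the recorded attributes listed above.

```python
def build_dependencies(events):
    """events: dicts with tid, pid, kind ('send'/'recv'), and value
    (the send's input value or the receive's return value).

    Returns (sender_tid, receiver_tid, value) dependency edges for each
    receive whose return value matches a recorded send's input value.
    """
    sends = {e["value"]: e for e in events if e["kind"] == "send"}
    edges = []
    for e in events:
        if e["kind"] == "recv" and e["value"] in sends:
            edges.append((sends[e["value"]]["tid"], e["tid"], e["value"]))
    return edges
```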
Information handling system physical component inventory to aid operational management through near field communication device interaction
An NFC communication from a mobile phone to an information handling system initiates an inventory by a management controller of the information handling system. The inventory is provided to the mobile phone with a second NFC communication so that an end user can see a visual depiction of the interior of the information handling system before opening the chassis of the system.