Patent classifications
G06F9/4818
Scheduling application tasks only on logical processors of a first set and operating system interferences in logical processors of a second set
A method, information processing system, and computer program product are provided for managing operating system interference on applications in a parallel processing system. A mapping of hardware multi-threading threads to at least one processing core is determined, and first and second sets of logical processors of the at least one processing core are determined. The first set includes at least one of the logical processors of the at least one processing core, and the second set includes at least one of a remainder of the logical processors of the at least one processing core. A processor schedules application tasks only on the logical processors of the first set of logical processors of the at least one processing core. Operating system interference events are scheduled only on the logical processors of the second set of logical processors of the at least one processing core.
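The partitioning described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names, the event kinds, and the choice of how many logical processors go to each set are all assumptions.

```python
def partition_core(logical_cpus, app_count):
    """Split a core's logical processors into an application set (the first
    set) and an OS-interference set (the second set). `app_count` is how
    many logical processors the application set receives."""
    app_set = set(logical_cpus[:app_count])
    os_set = set(logical_cpus[app_count:])
    return app_set, os_set

def allowed_cpus(event_kind, app_set, os_set):
    """Application tasks may run only on the first set; operating system
    interference events only on the second set."""
    return app_set if event_kind == "app_task" else os_set

# A core with four hardware threads: three for applications, one for the OS.
app_set, os_set = partition_core([0, 1, 2, 3], app_count=3)
app_cpus = allowed_cpus("app_task", app_set, os_set)
os_cpus = allowed_cpus("os_interference", app_set, os_set)
```

Because the two sets are disjoint, OS interference events can never preempt a logical processor that applications are scheduled on.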
Hot key throttling by querying and skipping task queue entries
Methods and apparatuses for scheduling tasks with a job scheduler are disclosed. In one embodiment, the method comprises: tracking a number of active tasks for each key of a plurality of keys; writing, by a scheduler, a query to identify a next scheduled task among a plurality of scheduled tasks ordered by time in a task queue, the query having an index that excludes tasks associated with a list of one or more keys of the plurality of keys that have a count of active tasks greater than a first limit associated with each key; querying, by the scheduler, the task queue using the query to identify the next scheduled task among the plurality of scheduled tasks, the next scheduled task being associated with a key not excluded by the query; and executing the next scheduled task.
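The skip-hot-keys query can be sketched as below, assuming an in-memory task queue ordered by scheduled time; the data layout, the per-key limit, and all names are illustrative, not taken from the patent.

```python
active = {}   # key -> count of currently active tasks
LIMIT = 2     # per-key limit on active tasks (a hot key is at or over it)

def next_task(queue):
    """Return the earliest scheduled task whose key is not over its limit,
    skipping entries for hot keys entirely."""
    excluded = {k for k, n in active.items() if n >= LIMIT}
    for task in sorted(queue, key=lambda t: t["time"]):
        if task["key"] not in excluded:
            return task
    return None   # every pending task belongs to a throttled key

queue = [
    {"time": 1, "key": "hot"},
    {"time": 2, "key": "hot"},
    {"time": 3, "key": "cold"},
]
active["hot"] = 2           # "hot" is already at its active-task limit
picked = next_task(queue)   # the two earlier "hot" entries are skipped
```

A production version would express the exclusion as a database index condition rather than filtering in application code, which is the point of the "query having an index" language above.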
Enhanced low-priority arbitration
A computing system may implement a low priority arbitration interrupt method that includes receiving a message signaled interrupt (MSI) message from an input output hub (I/O hub) transmitted over an interconnect fabric, selecting a processor to interrupt from a cluster of processors based on arbitration parameters, and communicating an interrupt service routine to the selected processor, wherein the I/O hub and the cluster of processors are located within a common domain.
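A sketch of the arbitration step follows. The abstract only says "arbitration parameters"; using pending-interrupt count (least-loaded selection) as that parameter is an assumption for illustration, as are all names.

```python
def arbitrate(cluster_load):
    """cluster_load: dict of processor id -> pending interrupt count.
    Pick the processor in the cluster with the fewest pending interrupts."""
    return min(cluster_load, key=cluster_load.get)

def deliver_msi(cluster_load, isr):
    """Select a target for an incoming MSI and hand it the service routine."""
    target = arbitrate(cluster_load)
    cluster_load[target] += 1   # the selected CPU now owns one more ISR
    return target, isr

load = {0: 3, 1: 1, 2: 2}
target, _ = deliver_msi(load, isr="handle_nic_rx")
```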
Circuit for fast interrupt handling
A circuit for fast interrupt handling is disclosed. An apparatus includes a processor circuit having an execution pipeline and a table configured to store a plurality of pointers that correspond to interrupt routines stored in a memory circuit. The apparatus further includes an interrupt redirect circuit configured to receive a plurality of interrupt requests. The interrupt redirect circuit may select a first interrupt request among a plurality of interrupt requests of a first type. The interrupt redirect circuit retrieves a pointer from the table using information associated with the selected request. Using the pointer, the execution pipeline retrieves a first program instruction from the memory circuit to execute a particular interrupt routine.
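The pointer-table dispatch can be sketched in software as below, with Python callables standing in for routine pointers in memory; the request fields and function names are illustrative assumptions.

```python
table = {}   # interrupt number -> handler "pointer" (a callable here)

def register(irq, handler):
    """Store a pointer to an interrupt routine in the table."""
    table[irq] = handler

def redirect(requests, wanted_type):
    """Select the first pending request of `wanted_type`, look up its
    routine pointer using the request's interrupt number, and run it."""
    req = next(r for r in requests if r["type"] == wanted_type)
    return table[req["irq"]]()

register(5, lambda: "timer handled")
register(9, lambda: "disk handled")
result = redirect(
    [{"irq": 9, "type": "low"}, {"irq": 5, "type": "high"}],
    wanted_type="high",
)
```

The hardware motivation is that the table lookup replaces a software dispatch loop, so the pipeline can begin fetching handler instructions immediately.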
SYSTEMS AND METHODS FOR MANAGING INTERRUPT PRIORITY LEVELS
A system includes non-transitory computer readable memory and a processor. The non-transitory computer readable memory stores a current processor interrupt priority level and a current disable interrupt control (DISICTL) interrupt priority level. The processor updates the current processor interrupt priority level based on respective interrupt priority levels associated with respective exceptions, and updates the current DISICTL interrupt priority level based on a respective DISICTL instruction, wherein the respective DISICTL instruction specifies a respective user-definable DISICTL interrupt priority level. The processor determines the highest interrupt priority level between the current processor interrupt priority level and the current DISICTL interrupt priority level, and applies the highest interrupt priority level during execution of respective code.
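The effective-priority rule above reduces to taking the maximum of the two stored levels. A minimal sketch, with numeric levels and method names chosen for illustration only:

```python
class PriorityState:
    """Tracks the two interrupt priority levels the system stores."""

    def __init__(self):
        self.processor_ipl = 0   # level driven by taken exceptions
        self.disictl_ipl = 0     # level set by software via DISICTL

    def take_exception(self, ipl):
        """An exception updates the processor interrupt priority level."""
        self.processor_ipl = ipl

    def disictl(self, ipl):
        """DISICTL instruction: software sets a user-definable level."""
        self.disictl_ipl = ipl

    def effective(self):
        """Level applied while executing code: the higher of the two."""
        return max(self.processor_ipl, self.disictl_ipl)

s = PriorityState()
s.take_exception(3)
s.disictl(5)          # software masks interrupts below level 5
applied = s.effective()
```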
ACCESSING DATA IN ACCORDANCE WITH AN EXECUTION DEADLINE
A method begins by a processing module of a dispersed storage and task (DST) execution unit receiving a data request for execution by the DST execution unit, the data request including an execution deadline. The method continues with the processing module comparing the execution deadline to a current time. When the execution deadline compares unfavorably to the current time the method continues with the processing module generating an error response. When the execution deadline compares favorably to the current time the method continues with the processing module determining a priority level based on the deadline and executing the data request in accordance with the priority level.
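The deadline check and the deadline-derived priority can be sketched as below. The abstract does not define the deadline-to-priority mapping, so the slack threshold here is an illustrative assumption, as are the names.

```python
def handle_request(deadline, now):
    """Reject a request whose deadline has passed; otherwise derive a
    priority from the remaining slack (less slack -> higher priority)."""
    if deadline <= now:
        return {"error": "deadline expired"}      # unfavorable comparison
    slack = deadline - now
    priority = "high" if slack < 10 else "normal" # illustrative threshold
    return {"priority": priority}

late = handle_request(deadline=100, now=150)
urgent = handle_request(deadline=105, now=100)
relaxed = handle_request(deadline=200, now=100)
```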
Method and system for reducing message passing for contention detection in distributed SIP server environments
A method, a system, and a computer program product are provided for reducing message passing for contention detection in distributed SIP server environments. The method is implemented in a computer infrastructure having computer executable code tangibly embodied on a computer readable storage medium having programming instructions operable to determine that a first site is waiting for a first object locked by a second site. The programming instructions are further operable to determine that a third site is waiting for a second object locked by the first site, and to send a first probe to the second site to determine whether the second site is waiting. A second probe is received and indicates that a site is waiting for an object locked by the first site. The second probe further indicates a deadlock in a distributed server environment to be resolved.
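The probe mechanism above resembles edge-chasing deadlock detection: each site knows only which site it is waiting on, a probe is forwarded along those wait-for edges, and a deadlock is declared when a probe returns to its initiator. The sketch below illustrates that idea under those assumptions; it is not the patented protocol, and all names are illustrative.

```python
def detect_deadlock(waits_on, initiator):
    """waits_on: dict mapping each site to the site it waits on (or None).
    Follow the probe from `initiator`; True if it cycles back to it."""
    probe = waits_on.get(initiator)
    seen = set()
    while probe is not None and probe not in seen:
        if probe == initiator:
            return True       # the probe came home: distributed deadlock
        seen.add(probe)
        probe = waits_on.get(probe)
    return False

# site1 waits on site2 and site2 waits on site1: a cross-site deadlock.
deadlocked = detect_deadlock({"site1": "site2", "site2": "site1"}, "site1")
clear = detect_deadlock({"site1": "site2", "site2": None}, "site1")
```

The message-reduction point of the abstract is that a site sends a probe only when it is itself blocked, rather than broadcasting wait-for state continuously.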
Task allocations based on color-coded representations
Embodiments of the present invention provide a system for intelligently optimizing the utilization of clusters. The system is configured to continuously gather real-time hardware telemetric data associated with one or more entity systems via a hardware telemetric device, continuously convert the real-time hardware telemetric data into a first color coded representation, receive one or more tasks associated with one or more entity applications, queue the one or more tasks associated with the one or more entity applications, determine hardware requirements associated with the one or more tasks, determine one or more attributes associated with the one or more tasks, convert the hardware requirements and the one or more attributes of the one or more tasks into a second color coded representation, and allocate the one or more tasks to the one or more entity systems based on the first color coded representation and the second color coded representation.
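The color-based matching can be sketched with a simple traffic-light coding: telemetry maps free capacity to a color, a task maps its demand to a color, and the task is placed on a system whose capacity color ranks at least as high. The three-color scheme and thresholds are illustrative assumptions; the patent does not specify the coding.

```python
ORDER = {"red": 0, "amber": 1, "green": 2}

def color_of_capacity(free_cpu_pct):
    """First representation: telemetry -> color of available capacity."""
    if free_cpu_pct >= 60: return "green"
    if free_cpu_pct >= 30: return "amber"
    return "red"

def color_of_demand(cpu_pct_needed):
    """Second representation: task requirements -> color of demand."""
    if cpu_pct_needed >= 60: return "green"
    if cpu_pct_needed >= 30: return "amber"
    return "red"

def allocate(task_color, systems):
    """systems: dict of system name -> capacity color. Place the task on
    any system whose capacity color ranks >= the task's demand color."""
    for name, capacity_color in systems.items():
        if ORDER[capacity_color] >= ORDER[task_color]:
            return name
    return None

systems = {"sysA": color_of_capacity(20), "sysB": color_of_capacity(80)}
chosen = allocate(color_of_demand(70), systems)   # a heavy task
```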
Controlled interruption and resumption of batch job processing
This disclosure provides various embodiments of software, systems, and techniques for controlled interruption of batch job processing. In one instance, a tangible computer readable medium stores instructions for managing batch jobs, where the instructions are operable when executed by a processor to identify an interruption event associated with a batch job queue. The instructions trigger an interruption of an executing batch job within the job queue such that the executed portion of the job is marked by a restart point embedded within the executable code. The instructions then restart the interrupted batch job at the restart point.
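The interrupt-and-resume cycle can be sketched as below: the job records a restart point after each completed unit of work, an interruption returns that point instead of finishing, and a later run resumes from it. The unit of work and all names are illustrative.

```python
def run_batch(items, start_at=0, interrupt_after=None):
    """Process items from index `start_at`. If an interruption event fires
    at `interrupt_after`, stop cleanly and return the restart point."""
    done = []
    for i in range(start_at, len(items)):
        done.append(items[i].upper())   # the unit of batch work
        restart_point = i + 1           # everything before here is complete
        if interrupt_after is not None and restart_point == interrupt_after:
            return done, restart_point  # interrupted: hand back the point
    return done, None                   # ran to completion

# Interrupt after two items, then resume from the recorded restart point.
first, point = run_batch(["a", "b", "c", "d"], interrupt_after=2)
rest, _ = run_batch(["a", "b", "c", "d"], start_at=point)
```

The key property is that the two runs together produce exactly one pass over the data, with no unit repeated or lost across the interruption.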
Method and Device for Anonymous Page Management, Terminal Device, and Readable Storage Medium
A method, and a terminal device therefor. The method includes: monitoring whether any process in the terminal device has a priority change, and obtaining information indicating the priority change of each such process; determining a target to-be-executed process according to the information indicating the priority change of each process; detecting whether a target anonymous page corresponding to the target to-be-executed process is stored in a swap space, wherein the swap space is configured to recycle anonymous pages; and prefetching the target anonymous page from the swap space in response to the target anonymous page being stored in the swap space.
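The prefetch path can be sketched with in-memory stand-ins for the swap space and resident memory; the choice of the highest-priority process as the target, and all names, are illustrative assumptions.

```python
swap = {}       # (pid, vaddr) -> anonymous page data reclaimed to swap
resident = {}   # (pid, vaddr) -> page data currently in memory

def on_priority_change(changes):
    """changes: dict of pid -> new priority. Treat the highest-priority
    process as the target and prefetch its anonymous pages from swap."""
    target = max(changes, key=changes.get)
    for key in [k for k in swap if k[0] == target]:  # copy keys: we mutate
        resident[key] = swap.pop(key)                # prefetch back to memory
    return target

swap[(42, 0x1000)] = b"anon page"
swap[(7, 0x2000)] = b"other process's page"
target = on_priority_change({42: 10, 7: 3})
```

Prefetching at the priority-change event means the pages are already resident by the time the process runs, avoiding swap-in faults on its critical path.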