G06F2213/2414

TECHNIQUES FOR ISSUING INTERRUPTS IN A DATA PROCESSING SYSTEM WITH MULTIPLE SCOPES

A technique for handling interrupts in a data processing system includes receiving, by an interrupt routing controller (IRC), an event routing message (ERM) that includes an event source number for a notification source with an unserviced interrupt. In response to receiving the ERM, the IRC builds an event notification message (ENM) based on the event source number. The IRC determines a scope for the ENM based on an event target group (ETG) associated with the event source number. The IRC issues the ENM to an interrupt presentation controller (IPC) at the scope associated with the ETG.
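The routing flow described in this abstract can be sketched as a toy simulation. All class, field, and scope names below are illustrative assumptions, not terms defined by the patent:

```python
from dataclasses import dataclass

# Scopes are assumed to widen from a single chip outward; the abstract
# only says the ENM is issued "at the scope associated with the ETG".
SCOPES = ("core", "chip", "node", "system")

@dataclass
class ENM:
    event_source_number: int
    scope: str

class InterruptRoutingController:
    def __init__(self, etg_table):
        # etg_table maps an event source number to its event target
        # group (ETG); each ETG carries the scope of its targets.
        self.etg_table = etg_table

    def handle_erm(self, event_source_number):
        # Build the event notification message (ENM) from the ERM's
        # event source number and the ETG-derived scope.
        etg = self.etg_table[event_source_number]
        enm = ENM(event_source_number, etg["scope"])
        return self.issue_to_ipc(enm)

    def issue_to_ipc(self, enm):
        # Stand-in for delivery to the interrupt presentation controller.
        return f"ENM for source {enm.event_source_number} issued at scope {enm.scope}"

irc = InterruptRoutingController({7: {"scope": "chip"}})
print(irc.handle_erm(7))
```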

MANAGING EFFICIENT SELECTION OF A PARTICULAR PROCESSOR THREAD FOR HANDLING AN INTERRUPT

A processing unit, connected via a system fabric to multiple processing units, calls a first single command in a bus protocol that samples, over the system fabric, the capability of snoopers distributed across the processing units to handle an interrupt. In response to detecting a first selection of snoopers with the capability to handle the interrupt, the processing unit calls a second single command in the bus protocol to poll the first selection of snoopers over the system fabric for an availability status. In response to detecting a second selection of snoopers that respond with an available status indicating availability to handle the interrupt, the processing unit assigns a single snooper from among the second selection of snoopers to handle the interrupt by calling a third single command in the bus protocol.
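The three-command sequence in this abstract can be modeled as a toy selection routine: sample capability, poll availability, then assign one snooper. The class and function names are invented for the sketch and do not come from the patent:

```python
class Snooper:
    def __init__(self, sid, capable, available):
        self.sid = sid
        self.capable = capable      # can this snooper handle the interrupt type?
        self.available = available  # is it free to take it right now?

def select_snooper(snoopers):
    # Command 1: sample capability over the (simulated) system fabric.
    first_selection = [s for s in snoopers if s.capable]
    if not first_selection:
        return None
    # Command 2: poll the capable snoopers for their availability status.
    second_selection = [s for s in first_selection if s.available]
    if not second_selection:
        return None
    # Command 3: assign a single snooper from the available selection.
    return second_selection[0].sid

fleet = [Snooper(0, False, True), Snooper(1, True, False), Snooper(2, True, True)]
print(select_snooper(fleet))  # only snooper 2 is both capable and available
```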

Method and apparatus for allocating interruptions

The present disclosure relates to a method and an apparatus for allocating interrupts in a multi-core system. A method for allocating interrupts in a multi-core system according to one embodiment of the present disclosure comprises: an interrupt load extraction step of extracting the interrupt load of each interrupt type; a step of extracting the task load of each core; a weighting factor determination step of determining weighting factors using differences between the task loads of the cores; a step of applying the weighting factors to obtain a converted value of each interrupt load; and an interrupt allocation step of allocating interrupt types to the cores such that the sum of the converted interrupt loads allocated to each core and that core's task load is uniform across the cores. According to one embodiment of the present disclosure, interrupts can be allocated such that both task processing and interrupt processing are performed efficiently.
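The allocation idea can be sketched as a greedy balancer: scale each interrupt type's load by a weighting factor, then assign interrupt types so that task load plus converted interrupt load is as uniform as possible per core. The single-factor weighting and the greedy strategy below are illustrative assumptions; the patent derives its factors from inter-core task-load differences:

```python
def allocate_interrupts(task_loads, interrupt_loads, weight=1.0):
    # Convert each interrupt type's load via the weighting factor.
    converted = {t: load * weight for t, load in interrupt_loads.items()}
    # totals starts at each core's task load and accumulates assignments.
    totals = dict(enumerate(task_loads))
    assignment = {}
    # Greedy: place the largest loads first, always on the lightest core.
    for itype, load in sorted(converted.items(), key=lambda kv: -kv[1]):
        core = min(totals, key=totals.get)
        assignment[itype] = core
        totals[core] += load
    return assignment, totals

# Core 0 starts lighter (task load 10 vs 30), so it absorbs more interrupts.
assignment, totals = allocate_interrupts([10, 30], {"net": 25, "disk": 10, "timer": 5})
print(assignment, totals)
```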

HARDWARE PARTITIONS FOR A CLOUD SERVER

A system comprises one or more processor cores, including a management processor core that executes instructions to partition at least a portion of the one or more processor cores into one or more partitions. The system includes at least one distributed virtual memory (DVM) hub that obtains a first DVM message from a processor core; determines, based on a processor core identifier of the DVM message, one or more recipient processor cores for the first DVM message; and provides the first DVM message to the one or more recipient processor cores. The system includes one or more interrupt interposers that are each associated with a processor core and prevent an interrupt originating from the associated processor core from being provided to a processor core that is outside the partition of the associated processor core.
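The interrupt-interposer behavior in this abstract can be modeled as a simple partition filter in front of each core: an interrupt is suppressed when its source core lies outside the destination core's partition. The partition table and method names are invented for the sketch:

```python
class InterruptInterposer:
    def __init__(self, partitions):
        # partitions: core id -> partition id, e.g. {0: "A", 1: "A", 2: "B"}
        self.partitions = partitions

    def deliver(self, src_core, dst_core):
        # Block interrupts that would cross a partition boundary.
        if self.partitions[src_core] != self.partitions[dst_core]:
            return False  # interrupt suppressed by the interposer
        return True       # interrupt delivered within the partition

interposer = InterruptInterposer({0: "A", 1: "A", 2: "B"})
print(interposer.deliver(0, 1))  # same partition: delivered
print(interposer.deliver(0, 2))  # crosses partitions: suppressed
```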

UTILIZING KERNEL THREAD DISPATCH LATENCY FOR INPUT/OUTPUT PROCESSING

Utilizing kernel thread dispatch latency for input/output processing is disclosed, including executing an interrupt handler in response to an interrupt raised by an I/O adapter; dispatching, by the interrupt handler, a kernel thread configured for I/O processing; processing, by the interrupt handler while waiting for an acknowledgement by the kernel thread, one or more I/O events of a plurality of I/O events in an I/O queue for the I/O adapter; transferring, by the interrupt handler, the I/O processing to the kernel thread in response to detecting the acknowledgement by the kernel thread; and processing, by the kernel thread, remaining I/O events in the I/O queue.
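The handoff described above can be sketched as a sequential simulation: the interrupt handler drains I/O events from the queue until the kernel thread it dispatched acknowledges, then transfers the rest of the queue to that thread. The `dispatch_latency` tick count stands in for kernel thread dispatch latency; the whole model is illustrative, not the patented implementation:

```python
from collections import deque

def process_io(events, dispatch_latency):
    queue = deque(events)
    handled_by_irq, handled_by_thread = [], []
    ticks = 0
    # Interrupt handler processes events while waiting for the kernel
    # thread's acknowledgement (modeled as a fixed number of ticks).
    while queue and ticks < dispatch_latency:
        handled_by_irq.append(queue.popleft())
        ticks += 1
    # Ack detected: transfer remaining I/O processing to the kernel thread.
    while queue:
        handled_by_thread.append(queue.popleft())
    return handled_by_irq, handled_by_thread

irq, thread = process_io(["e1", "e2", "e3", "e4"], dispatch_latency=2)
print(irq, thread)  # handler covers the latency window, thread takes the rest
```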