In-node aggregation and disaggregation of MPI alltoall and alltoallv collectives
10521283 · 2019-12-31
CPC classification
G06F9/542
Abstract
An MPI collective operation is carried out in a fabric of network elements by transmitting MPI messages from all the initiator processes in an initiator node to designated ones of the responder processes in respective responder nodes. Respective payloads of the MPI messages are combined in a network interface device of the initiator node to form an aggregated MPI message. The aggregated MPI message is transmitted through the fabric to network interface devices of the responder nodes, which disaggregate the aggregated MPI message into individual messages and distribute the individual messages to the designated responder processes.
Claims
1. A method of communication, comprising the steps of: executing a group of processes in a fabric of nodes comprising Network Interface Controllers (NICs), the NICs having respective NIC Communicator Controllers (NCCs); transmitting messages from all the processes to all the processes of the group of processes via the NICs by performing a Message Passing Interface (MPI) collective operation, wherein the nodes of the fabric function concurrently as initiator nodes executing respective initiator processes and as responder nodes executing respective responder processes, and wherein the collective operation comprises transmitting MPI messages through the fabric from all the initiator processes to all of the responder processes; with the NCCs of the initiator nodes combining respective payloads of the MPI messages to form an aggregated MPI message; transmitting the aggregated MPI message through the fabric to the responder nodes; in respective NCCs of the responder nodes disaggregating the aggregated MPI message into individual messages; and distributing the individual messages to the responder processes.
2. The method according to claim 1, wherein the aggregated MPI message has exactly one transport header that comprises a destination address of the aggregated MPI message.
3. The method according to claim 1, wherein the MPI messages comprise respective MPI headers comprising designations of the responder processes, wherein the responder processes are referenced in an MPI communicator object.
4. The method according to claim 3, wherein the MPI collective operation comprises forwarding by a communication library the MPI communicator object and the payloads to the NICs of the initiator nodes.
5. The method according to claim 1, further comprising: maintaining a communicator context in the NCCs of the initiator nodes, wherein transmitting the aggregated MPI message comprises directing the aggregated MPI message to local identifiers (LIDs) in the responder nodes according to the communicator context.
6. The method according to claim 1, comprising forming the aggregated MPI message by assembling pointers to message data, and including respective local identifier addresses for the message data in the aggregated MPI message.
7. An apparatus of communication, comprising: a fabric of nodes configured for executing a group of processes; respective network interface controllers (NICs) in the nodes; and respective NIC Communicator Controllers (NCCs) in the NICs, wherein the nodes are operative for transmitting messages from all the processes to all the processes of the group of processes via the NICs by performing a Message Passing Interface (MPI) collective operation, wherein the nodes of the fabric function concurrently as initiator nodes executing respective initiator processes and as responder nodes executing respective responder processes, and wherein the collective operation comprises transmitting MPI messages through the fabric from all the initiator processes to all of the responder processes; and wherein the NCCs of the initiator nodes are configured for combining respective payloads of the MPI messages to form an aggregated MPI message, and the NCCs of the responder nodes are configured for disaggregating the aggregated MPI message into individual messages, and wherein the responder nodes are operative for distributing the individual messages to the responder processes.
8. The apparatus according to claim 7, wherein the aggregated MPI message has exactly one transport header that comprises a destination address of the aggregated MPI message.
9. The apparatus according to claim 7, wherein the MPI messages comprise respective MPI headers containing designations of the responder processes, wherein the responder processes are referenced in an MPI communicator object.
10. The apparatus according to claim 9, wherein the MPI collective operation comprises forwarding by a communication library the MPI communicator object and the payloads to the NCCs of the initiator nodes.
11. The apparatus according to claim 7, wherein the NCCs of the initiator nodes are operative for forming the aggregated MPI message by assembling pointers to message data, and including respective local identifier addresses for the message data in the aggregated MPI message.
12. The apparatus according to claim 7, wherein the NCCs are operative for maintaining a communicator context and transmitting the aggregated MPI message by directing the aggregated MPI message to local identifiers (LIDs) in the responder nodes according to the communicator context.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
(1) For a better understanding of the present invention, reference is made to the detailed description of the invention, by way of example, which is to be read in conjunction with the accompanying drawings, wherein like elements are given like reference numerals.
DETAILED DESCRIPTION OF THE INVENTION
(10) In the following description, numerous specific details are set forth in order to provide a thorough understanding of the various principles of the present invention. It will be apparent to one skilled in the art, however, that not all these details are necessarily always needed for practicing the present invention. In this instance, well-known circuits, control logic, and the details of computer program instructions for conventional algorithms and processes have not been shown in detail in order not to obscure the general concepts unnecessarily.
(11) Documents incorporated by reference herein are to be considered an integral part of the application except that, to the extent that any terms are defined in these incorporated documents in a manner that conflicts with definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
(12) Definitions.
(13) A switch fabric or fabric refers to a network topology in which network nodes interconnect via one or more network switches (such as crossbar switches), typically through many ports. The interconnections are configurable such that data is transmitted from one node to another node via designated ports. A common application for a switch fabric is a high performance backplane.
(14) System Architecture.
(15) Reference is now made to
(16) Reference is now made to
(17) Reference is now made to
(18) At initial step 40 an MPI alltoall or alltoallv collective operation is initiated by the host (not shown) of NIC 28. Next, at step 42 processes 36 (P_1 through P_P) commit their entire payloads to NIC 28. The payloads in this context are composed of all of the messages (including MPI headers) originated by the processes 36 to other processes in the communicator. These messages are referred to herein as MPI messages.
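By way of illustration only, the sequence of steps 40 and 42 corresponds to the standard alltoall call shown in the following C/MPI sketch: each process hands the library a send buffer containing one fixed-size block per peer, and that buffer constitutes the alltoall payload the process commits to the NIC. Buffer names and sizes are purely illustrative and are not part of the original disclosure.

```c
/* Minimal MPI_Alltoall sketch: each of P processes contributes one block
 * per peer; the whole send buffer is the "alltoall payload" that the
 * process commits (step 42). Buffer names are illustrative only. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int block = 4;                       /* ints sent to each peer */
    int *sendbuf = malloc((size_t)nprocs * block * sizeof(int));
    int *recvbuf = malloc((size_t)nprocs * block * sizeof(int));
    for (int i = 0; i < nprocs * block; i++)
        sendbuf[i] = rank;                     /* every peer receives this rank's value */

    /* The entire sendbuf is the payload committed to the NIC. */
    MPI_Alltoall(sendbuf, block, MPI_INT, recvbuf, block, MPI_INT,
                 MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```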
(19) After all local processes in the communicator have committed their alltoall payloads, at step 44 NIC 28 assembles a single message to each of the nodes in the communicator, referred to herein as an aggregated message. Reference is now made to
(20) Reverting to
(21) Reference is now made to
(22) Any number of MPI processes 66 execute in the node 62. In this example all the MPI processes 66 are members of the same communicator. Instances of a communication software library 68 translate MPI commands of the MPI processes 66 into corresponding driver commands for a NIC driver 70. In an InfiniBand implementation, the MPI processes 66 translate the MPI commands into InfiniBand verb functions. The NIC driver 70 itself is a software library, which translates the driver commands issued by the library 68 into hardware commands that are acceptable to a network interface card 72. In an InfiniBand implementation the commands may be work queue elements (WQEs). Data aggregation and disaggregation (steps 44, 58;
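For orientation, the following sketch shows, with the standard libibverbs API, the kind of objects managed beneath the communication library in an InfiniBand implementation: a protection domain, a completion queue, and one queue pair per local MPI process. It is a minimal, hypothetical setup; error handling, memory registration, and connection establishment are omitted.

```c
/* Hypothetical setup of the verbs objects handled at the NIC driver layer.
 * Error handling is omitted for brevity. */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no IB device\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd      *pd  = ibv_alloc_pd(ctx);
    struct ibv_cq      *cq  = ibv_create_cq(ctx, 64, NULL, NULL, 0);

    /* One queue pair per local MPI process, as in the scheme described above. */
    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .cap     = { .max_send_wr = 64, .max_recv_wr = 64,
                     .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);
    printf("created QP number 0x%x\n", qp->qp_num);

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```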
(23) Reference is now made to
(24) Reference is now made to
(25) Reference is now made to
(26) Next, at step 140 the communicator contexts 116, 118, 120 are initialized on their respective NICs 104, 106, 108 with the corresponding fields that describe the communicator and are associated with respective queue pairs. For example, on LID 7 in NIC 104, the local MPI process queue pairs are queue pairs 122, 124, and the remote LIDs are LID 5 and LID 12 in NICs 106, 108, respectively. LID 12 in NIC 108 hosts two MPI processes 100, 102.
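A communicator context of the kind initialized at step 140 might be laid out as in the following C sketch. The structure and field names are hypothetical; the patent does not prescribe an on-NIC encoding.

```c
/* Hypothetical layout of a per-communicator context kept by an NCC.
 * Field names and sizes are illustrative only. */
#include <stdint.h>

#define MAX_LOCAL_QPS   8   /* local MPI processes on this node      */
#define MAX_REMOTE_LIDS 64  /* other nodes participating in the comm */

struct ncc_comm_context {
    uint32_t comm_id;                         /* identifies the MPI communicator */
    uint16_t local_lid;                       /* e.g. LID 7 in the example above */
    uint32_t local_qpn[MAX_LOCAL_QPS];        /* QPs of local member processes   */
    uint8_t  num_local_qps;
    uint16_t remote_lid[MAX_REMOTE_LIDS];     /* e.g. LID 5 and LID 12           */
    uint8_t  ranks_per_lid[MAX_REMOTE_LIDS];  /* processes behind each remote LID */
    uint8_t  num_remote_lids;
};
```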
(27) Next, at step 142 the MPI alltoall function is invoked by all of the local MPI processes of the node 86.
(28) Next, at step 144 the communicator and the alltoall payload are forwarded to the NIC 104 by the communication library. In an InfiniBand implementation, step 144 comprises posting work queue element 126 to queue pairs 122, 124, which, as noted above, include data pointer 128 to the payload data in block 130. In the example of
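In verbs terms, the commit in step 144 amounts to posting a send work request whose scatter/gather entry points at the payload block, analogous to data pointer 128 and block 130. The following sketch uses the standard ibv_post_send call; the wrapper function and wr_id value are hypothetical, and memory registration (ibv_reg_mr) is assumed to have been performed elsewhere.

```c
/* Hypothetical commit of one process's alltoall payload to the NIC:
 * a single work request whose scatter/gather entry points at the
 * payload block. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stddef.h>

int commit_alltoall_payload(struct ibv_qp *qp, struct ibv_mr *mr,
                            void *payload, size_t len)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)payload,   /* pointer to the payload block */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,             /* from a prior ibv_reg_mr()    */
    };

    struct ibv_send_wr wr = {
        .wr_id      = 0x128,            /* arbitrary completion cookie  */
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_SEND,
        .send_flags = IBV_SEND_SIGNALED,
    };

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);
}
```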
(29) Next, at delay step 146 the NCCs 110, 112, 114 in the NICs 104, 106, 108 wait for all of the MPI processes to commit their alltoall payloads. For example, NIC 104 waits for queue pairs 122, 124 to post work queue element 126.
(30) After all local processes have committed their data, at step 148 the NCC 110 assembles the data pointers and creates a single aggregated message, which is directed to the LIDs in the remote NICs 106, 108 according to the communicator context. The NCC 110 is aware of the organization of the alltoall data, and thus of which data belong to which LID. In an InfiniBand implementation, the NCC 110 may use a queue pair different from the queue pairs of the local processes to transmit the data. The NCC 110 may also add an extra header to the aggregated message in order to identify the communicator on which the alltoall operation is performed.
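The assembly in step 148 can be pictured as building, for each remote LID, a gather list of per-process segments behind a single extra header identifying the communicator. The following C sketch is purely illustrative; the structures, the seg_for callback, and the header fields are hypothetical.

```c
/* Hypothetical assembly of one aggregated message for a single remote LID.
 * The NCC gathers, from every local process's payload, the segment destined
 * for that LID and prepends a small header naming the communicator. */
#include <stdint.h>
#include <stddef.h>

struct agg_header {
    uint32_t comm_id;     /* communicator on which the alltoall runs       */
    uint16_t dst_lid;     /* destination node                              */
    uint16_t num_segs;    /* one segment per local process                 */
};

struct seg_ref {
    const void *addr;     /* pointer into a local process's payload block  */
    uint32_t    len;
};

/* seg_for() stands in for the NCC's knowledge of the alltoall data layout. */
size_t build_aggregate(uint32_t comm_id, uint16_t dst_lid,
                       struct seg_ref (*seg_for)(int local_rank, uint16_t lid),
                       int num_local, struct agg_header *hdr,
                       struct seg_ref *out, size_t max_out)
{
    hdr->comm_id  = comm_id;
    hdr->dst_lid  = dst_lid;
    hdr->num_segs = 0;

    size_t n = 0;
    for (int r = 0; r < num_local && n < max_out; r++) {
        out[n++] = seg_for(r, dst_lid);   /* this rank's data for dst_lid */
        hdr->num_segs++;
    }
    return n;   /* entries in the gather list behind the single header */
}
```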
(31) In the above example, queue pair 132 is used to send the data, and the message transfer comprises two messages: one message to LID 5 in NIC 106 containing alltoall data for remote process 98, and one message to LID 12 in NIC 108 containing alltoall data for the remote processes 100, 102.
(32) The aggregated message is transmitted at step 150. When the aggregated message arrives at its destinations, the communicator contexts 118, 120 are fetched again at step 152 by the receiving NCCs 112, 114, respectively. The NCCs 112, 114 are aware of the order of the alltoall payload within the aggregated message.
(33) Then, at final step 154 the NCCs 112, 114 disaggregate the aggregated message and scatter the data to the MPI processes according to the communicator contexts 118, 120, respectively. In the above example, the NCC 114 in NIC 108 breaks the message into two parts, and scatters the first half to queue pair 134 (Qp 4) and the second half to queue pair 136 (Qp 5).
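The scatter in final step 154 can be sketched as below, assuming equal-size alltoall blocks so that the aggregated payload splits evenly across the local queue pairs. The deliver_to_qp stub and all names are hypothetical.

```c
/* Hypothetical disaggregation at a receiving NCC: split the aggregated
 * payload into equal parts and scatter them to the local queue pairs
 * listed in the communicator context (e.g. Qp 4 and Qp 5 above). */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Stand-in for the NIC's local delivery path to a queue pair. */
static void deliver_to_qp(uint32_t qpn, const uint8_t *data, size_t len)
{
    printf("deliver %zu bytes to QP %u (first byte %u)\n", len, qpn, data[0]);
}

static void disaggregate(const uint8_t *agg_payload, size_t agg_len,
                         const uint32_t *local_qpn, size_t num_local)
{
    size_t part = agg_len / num_local;   /* assumes equal-size alltoall blocks */
    for (size_t i = 0; i < num_local; i++)
        deliver_to_qp(local_qpn[i], agg_payload + i * part, part);
}

int main(void)
{
    uint8_t payload[8] = {1, 1, 1, 1, 2, 2, 2, 2};  /* halves for Qp 4 and Qp 5 */
    uint32_t qpns[2] = {4, 5};
    disaggregate(payload, sizeof payload, qpns, 2);
    return 0;
}
```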
(34) It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof that are not in the prior art, which would occur to persons skilled in the art upon reading the foregoing description.