G06F13/366

PRIORITIZED ARBITRATION USING FIXED PRIORITY ARBITER
20200104271 · 2020-04-02 ·

An arbiter may include a plurality of cells, mapping logic, a fixed priority arbiter, and unmapping logic. Each cell may be associated with a corresponding client and configured to store a priority for the corresponding client. The mapping logic may be connected to the plurality of cells to order requests received from the clients according to the priorities stored in the cells. The fixed priority arbiter may receive the ordered requests and generate a grant for a winning request of the requests. The unmapping logic may use the stored priorities to map the grant back to the winning client that sent the winning request.
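The map / fixed-priority-arbitrate / unmap flow can be illustrated with a short sketch. All names here are illustrative assumptions, not from the patent; lower priority values are assumed to mean higher priority.

```python
def arbitrate(requests, priorities):
    """Grant one requesting client via a fixed priority arbiter.

    requests   -- list of booleans; requests[i] is True if client i requests
    priorities -- per-client stored priorities (the "cells"); lower wins
    """
    n = len(requests)
    # Mapping logic: reorder client indices by their stored priority so the
    # highest-priority client lands in slot 0.
    order = sorted(range(n), key=lambda i: priorities[i])
    # Fixed priority arbiter: the first asserted request in the reordered
    # vector wins (slot 0 always beats slot 1, and so on).
    for client in order:
        if requests[client]:
            # Unmapping logic: the slot index is translated back to the
            # original client index, which is returned as the grant.
            return client
    return None  # no requests asserted
```

For example, with clients 0 and 2 requesting and client 2 holding the highest priority, `arbitrate([True, False, True], [2, 1, 0])` grants client 2.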

Methods and apparatus for performing partial reconfiguration in a pipeline-based network topology
10606779 · 2020-03-31 ·

A programmable integrated circuit that can support partial reconfiguration is provided. The programmable integrated circuit may include multiple processing nodes that serve as accelerator blocks for an associated host processor that is communicating with the integrated circuit. The processing nodes may be connected in a hybrid shared-pipelined topology. Each pipeline stage in the hybrid architecture may include a bus switch and at least two shared processing nodes connected to the output of the bus switch. The bus switch may be configured to route an incoming packet to a selected one of the two processing nodes in that pipeline stage, or to route the incoming packet only to the active node if the other node is undergoing partial reconfiguration. Configured in this way, the hybrid topology supports partial reconfiguration of the processing nodes without disrupting or limiting the operating frequency of the overall network.
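The per-stage routing decision can be sketched as follows. This is a minimal software model under assumed names (`route`, a `(node, active)` pair list); the patent describes a hardware bus switch, not this API.

```python
def route(stage_nodes, packet_id):
    """Model of a pipeline-stage bus switch.

    stage_nodes -- list of (node, active) pairs for this stage; 'active'
                   is False while that node undergoes partial reconfiguration
    packet_id   -- identifier used to spread packets across active nodes
    """
    # Only nodes not being reconfigured are eligible targets.
    active = [node for node, ok in stage_nodes if ok]
    if not active:
        raise RuntimeError("all nodes in this stage are reconfiguring")
    # With both nodes active, alternate between them; with one node under
    # reconfiguration, all traffic falls through to the remaining node.
    return active[packet_id % len(active)]
```

When node "B" is being reconfigured, every packet is steered to "A"; once "B" returns, packets are again shared across both nodes.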

Procedures for implementing source based routing within an interconnect fabric on a system on chip

Optimizing transaction traffic on a System on a Chip (SoC) by using procedures such as expanding transactions and consolidating responses at nodes of an interconnect fabric for broadcasts, multi-casts, any-casts, and source based routing type transactions, intra-streaming two or more transactions over a stream defined by a paired virtual channel-transaction class, trunking physical resources that share a common logical identifier, and using hashing to select among multiple physical resources sharing a common logical identifier.
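The last procedure, hash-based selection among physical resources sharing one logical identifier, can be sketched briefly. The function and its parameters are illustrative assumptions; the key property is that a stable hash of a flow key keeps related transactions on the same physical resource.

```python
import hashlib

def select_physical(logical_id, trunk_members, flow_key):
    """Pick one physical resource from a trunk sharing 'logical_id'.

    A deterministic hash of (logical_id, flow_key) spreads distinct flows
    across the trunk members while keeping each flow on a single member.
    """
    digest = hashlib.sha256(f"{logical_id}:{flow_key}".encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(trunk_members)
    return trunk_members[index]
```

Because the hash is deterministic, repeated transactions of the same flow always resolve to the same physical resource, which preserves ordering without per-flow state at the selector.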

Topology-aware parallel reduction in an accelerator

A topology-aware parallel reduction method, system, and recording medium including obtaining the GPU connection topology of each of a plurality of GPUs as a connection tree, transforming the connection tree into a three-layer tree comprising an intra-root tree, an intra-node tree, and an inter-node tree, evenly partitioning each entry on each of the GPUs, and selectively transferring data in either direction, or in each direction simultaneously, along the evenly partitioned three-layer tree using a full-duplex configuration of the PCIe bandwidth.
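The even-partitioning step can be illustrated with a small helper. This is a sketch under assumed names (`partition`), not the patented method itself: a buffer is split into near-equal ranges, one per GPU, with any remainder spread over the first partitions, as is typical in ring- or tree-based reductions.

```python
def partition(buffer_len, num_gpus):
    """Split a buffer of buffer_len entries into num_gpus contiguous
    (start, end) ranges whose sizes differ by at most one entry."""
    base, extra = divmod(buffer_len, num_gpus)
    parts, start = [], 0
    for g in range(num_gpus):
        size = base + (1 if g < extra else 0)  # first 'extra' parts get +1
        parts.append((start, start + size))
        start += size
    return parts
```

For example, `partition(10, 4)` yields `[(0, 3), (3, 6), (6, 8), (8, 10)]`; each GPU then reduces its own range, which lets transfers in both directions proceed simultaneously over full-duplex links.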

Devices and methods for prioritizing transmission of events on serial communication links

The present disclosure relates generally to serial communication links and, more specifically, to events communicated on serial communication links and the timing of those events. The events may be communicated according to a prioritization process.
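The abstract does not specify the prioritization process, but a transmit-side priority queue is one plausible reading. The sketch below is entirely an assumption: events are released by priority, with arrival order breaking ties.

```python
import heapq
import itertools

class EventQueue:
    """Transmit-side queue that releases events by priority, then FIFO.

    Lower priority values are sent first; a monotonic sequence number
    breaks ties so equal-priority events keep their arrival order.
    """
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def push(self, event, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), event))

    def pop(self):
        # Remove and return the highest-priority (lowest value) event.
        return heapq.heappop(self._heap)[2]
```

With events pushed as ("low", 5), ("urgent", 0), ("mid", 3), successive pops return "urgent", then "mid", then "low".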
