G06F15/825

DATA INPUT/OUTPUT OPERATIONS DURING LOOP EXECUTION IN A RECONFIGURABLE COMPUTE FABRIC
20230050687 · 2023-02-16

Various examples are directed to systems and methods in which a first flow controller of a first synchronous flow may receive an instruction to execute a first loop using the first synchronous flow. The first flow controller may determine a first iteration index for a first iteration of the first loop. The first flow controller may send, to a first compute element of the first synchronous flow, a first synchronous message to initiate a first synchronous flow thread for executing the first iteration of the first loop. The first synchronous message may comprise the first iteration index. The first compute element may execute an input/output operation at a first location of a first compute element memory indicated by the first iteration index.
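The scheme above can be sketched in plain Python: a flow controller tags each loop iteration with an iteration index and sends a synchronous message to a compute element, which performs its I/O at the memory location the index selects. All names here (`FlowController`, `ComputeElement`, `SyncMessage`) are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class SyncMessage:
    iteration_index: int  # selects the compute-element memory location

class ComputeElement:
    def __init__(self, memory_size):
        self.memory = [0] * memory_size  # first compute element memory

    def on_sync_message(self, msg, value):
        # I/O operation at the location indicated by the iteration index
        self.memory[msg.iteration_index] = value

class FlowController:
    def __init__(self, compute_element):
        self.ce = compute_element

    def execute_loop(self, values):
        # one synchronous flow thread per iteration, each carrying its index
        for i, v in enumerate(values):
            self.ce.on_sync_message(SyncMessage(iteration_index=i), v)

ce = ComputeElement(memory_size=8)
FlowController(ce).execute_loop([10, 20, 30])
print(ce.memory[:4])  # [10, 20, 30, 0] — each iteration wrote its own slot
```

Because the index travels inside the message, iterations need no shared loop counter, which is the property the abstract emphasizes.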

Data input/output operations during loop execution in a reconfigurable compute fabric
11709796 · 2023-07-25

Various examples are directed to systems and methods in which a first flow controller of a first synchronous flow may receive an instruction to execute a first loop using the first synchronous flow. The first flow controller may determine a first iteration index for a first iteration of the first loop. The first flow controller may send, to a first compute element of the first synchronous flow, a first synchronous message to initiate a first synchronous flow thread for executing the first iteration of the first loop. The first synchronous message may comprise the first iteration index. The first compute element may execute an input/output operation at a first location of a first compute element memory indicated by the first iteration index.

Anti-congestion flow control for reconfigurable processors

A compiler configured to configure memory nodes with a ready-to-read credit counter and a write credit counter. The ready-to-read credit counter of a particular upstream memory node initialized with as many read credits as a buffer depth of a corresponding downstream memory node. The ready-to-read credit counter configured to decrement when a buffer data unit is written by the particular upstream memory node into the corresponding downstream memory node, and to increment when the particular upstream memory node receives from the corresponding downstream memory node a read ready token. The write credit counter of the particular upstream memory node initialized with one or more write credits and configured to decrement when the particular upstream memory node begins writing the buffer data unit into the corresponding downstream memory node, and to increment when the particular upstream memory node receives from the corresponding downstream memory node a write done token.
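The two counters described above can be modeled directly: the ready-to-read counter starts at the downstream buffer depth and tracks free downstream slots, while the write counter tracks writes in flight. This is a minimal sketch of the credit discipline only; class and token names are assumptions.

```python
class UpstreamNode:
    def __init__(self, downstream_buffer_depth, write_credits=1):
        # initialized with as many read credits as the downstream buffer depth
        self.ready_to_read = downstream_buffer_depth
        self.write_credits = write_credits

    def can_write(self):
        return self.ready_to_read > 0 and self.write_credits > 0

    def begin_write(self):
        assert self.can_write(), "would overflow the downstream buffer"
        self.ready_to_read -= 1   # one fewer free downstream slot
        self.write_credits -= 1   # a write is now in flight

    def on_write_done_token(self):
        self.write_credits += 1   # downstream acknowledged the write

    def on_read_ready_token(self):
        self.ready_to_read += 1   # downstream consumed a buffer data unit

node = UpstreamNode(downstream_buffer_depth=2)
node.begin_write(); node.on_write_done_token()
node.begin_write(); node.on_write_done_token()
print(node.can_write())  # False: buffer full until a read-ready token arrives
node.on_read_ready_token()
print(node.can_write())  # True
```

The anti-congestion property falls out of `can_write`: the upstream node simply cannot issue a write that the downstream buffer has no room for.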

DETERMINING INTERNODAL PROCESSOR INTERCONNECTIONS IN A DATA-PARALLEL COMPUTING SYSTEM
20230229624 · 2023-07-20

A computer-implemented method comprises a topological communications configurator (TCC) of a computing system determining a connections-optimized configuration of processors among compute nodes of the system. Processors included in the compute nodes can execute compute workers of an application of the system and can form intranodal segments of an internodal interconnection topology communicatively coupling the intranodal segments. The intranodal segments can be interconnected via an internodal interconnections fabric. The TCC can determine the connections-optimized configuration based on internodal communications costs corresponding to communications routes among the internodal segments via the internodal interconnection fabric. The computing system can comprise the TCC and can comprise a data-parallel computing system.
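A toy rendering of the idea: score candidate processor assignments by total internodal communications cost and keep the connections-optimized one. The brute-force search and the simple volume-times-link-cost model are assumptions for illustration, not the TCC's actual method.

```python
from itertools import permutations

def internodal_cost(placement, traffic, node_of, link_cost):
    # placement: worker -> processor; node_of: processor -> compute node
    total = 0
    for (a, b), volume in traffic.items():
        na, nb = node_of[placement[a]], node_of[placement[b]]
        if na != nb:                      # intranodal traffic is "free" here
            total += volume * link_cost[(na, nb)]
    return total

def configure(workers, processors, traffic, node_of, link_cost):
    # exhaustively score every assignment of workers to processors
    best = min(permutations(processors, len(workers)),
               key=lambda perm: internodal_cost(dict(zip(workers, perm)),
                                                traffic, node_of, link_cost))
    return dict(zip(workers, best))

node_of = {"p0": 0, "p1": 0, "p2": 1, "p3": 1}   # two processors per node
link_cost = {(0, 1): 5, (1, 0): 5}               # internodal fabric cost
traffic = {("w0", "w1"): 10, ("w1", "w2"): 1}    # communication volumes
placement = configure(["w0", "w1", "w2"], ["p0", "p1", "p2", "p3"],
                      traffic, node_of, link_cost)
# the heavily communicating pair w0/w1 lands on the same compute node
print(node_of[placement["w0"]] == node_of[placement["w1"]])  # True
```

A real configurator would of course use a scalable heuristic rather than enumerating permutations, but the objective being minimized is the same.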

PROGRAMMABLE DEVICE, HIERARCHICAL PARALLEL MACHINES, AND METHODS FOR PROVIDING STATE INFORMATION
20230214282 · 2023-07-06

Programmable devices, hierarchical parallel machines and methods for providing state information are described. In one such programmable device, programmable elements are provided. The programmable elements are configured to implement one or more finite state machines. The programmable elements are configured to receive an N-digit input and provide an M-digit output as a function of the N-digit input. The M-digit output includes state information from less than all of the programmable elements. Other programmable devices, hierarchical parallel machines and methods are also disclosed.
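The key point above is that the output exposes state from only a subset of the elements. A minimal sketch, assuming a trivial toggle FSM per element (the element behavior and names are illustrative):

```python
class Element:
    def __init__(self):
        self.state = 0

    def step(self, digit):
        # toy finite state machine: toggle state on an input digit of 1
        if digit == 1:
            self.state ^= 1

class ProgrammableDevice:
    def __init__(self, num_elements, reported):
        self.elements = [Element() for _ in range(num_elements)]
        self.reported = reported   # indices whose state appears in the output

    def process(self, n_digit_input):
        for elem, digit in zip(self.elements, n_digit_input):
            elem.step(digit)
        # M-digit output: state from fewer than all of the elements
        return [self.elements[i].state for i in self.reported]

dev = ProgrammableDevice(num_elements=4, reported=[0, 2])
out = dev.process([1, 1, 0, 1])   # N = 4 digits in, M = 2 digits out
print(out)  # [1, 0] — states of elements 0 and 2 only
```

Here N = 4 and M = 2, so elements 1 and 3 update internal state that never reaches the output, matching the "less than all" claim.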

MASKING FOR COARSE GRAINED RECONFIGURABLE ARCHITECTURE
20230058355 · 2023-02-23

A reconfigurable compute fabric can include multiple nodes, and each node can include multiple tiles with respective processing and storage elements. The tiles can be arranged in an array or grid and can be communicatively coupled. In an example, a first node can include a tile cluster of N memory-compute tiles, and the N memory-compute tiles can be coupled using a first portion of a synchronous compute fabric. Operations performed by the respective processing and storage elements of the N memory-compute tiles can be selectively enabled or disabled based on information in a mask field of data propagated through the first portion of the synchronous compute fabric.
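The mask-field behavior can be illustrated with a few lines of Python: data moving through the fabric carries a mask, and tile *i* performs its operation only when bit *i* is set. The add-one operation and the function name are assumptions for illustration.

```python
def propagate(values, mask):
    # one value per memory-compute tile; bit i of the mask enables tile i
    results = []
    for i, v in enumerate(values):
        if (mask >> i) & 1:
            results.append(v + 1)   # tile enabled: perform its operation
        else:
            results.append(v)       # tile disabled: value passes through
    return results

# tiles 0 and 2 enabled, tiles 1 and 3 disabled
print(propagate([10, 20, 30, 40], mask=0b0101))  # [11, 20, 31, 40]
```

Because the mask rides along with the propagated data, enable/disable decisions need no separate control path, which fits the synchronous-fabric framing of the abstract.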

Programmable device, hierarchical parallel machines, and methods for providing state information
11604687 · 2023-03-14

Programmable devices, hierarchical parallel machines and methods for providing state information are described. In one such programmable device, programmable elements are provided. The programmable elements are configured to implement one or more finite state machines. The programmable elements are configured to receive an N-digit input and provide an M-digit output as a function of the N-digit input. The M-digit output includes state information from less than all of the programmable elements. Other programmable devices, hierarchical parallel machines and methods are also disclosed.

ARTIFICIAL INTELLIGENCE CHIP AND ARTIFICIAL INTELLIGENCE CHIP-BASED DATA PROCESSING METHOD
20230126978 · 2023-04-27

Embodiments of the present disclosure provide an artificial intelligence (AI) chip and an AI chip-based data processing method. The AI chip includes: a data flow network for processing, on the basis of an AI algorithm, data to be processed. The data flow network includes: at least one calculation module, each configured to calculate, on the basis of one of at least one operation node corresponding to the AI algorithm, the data to be processed, and output a calculation result; and a next transfer module corresponding to each calculation module, connected to each calculation module, and configured to receive the calculation result output by each calculation module and process the calculation result, the data to be processed flowing in the data flow network according to a preset data flow direction.
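The calculation-module/transfer-module pairing above can be sketched as follows; the specific operations (scale, offset) and class names are illustrative assumptions, standing in for the operation nodes of an AI algorithm.

```python
class CalcModule:
    def __init__(self, op):
        self.op = op          # one operation node of the AI algorithm

    def calculate(self, data):
        return self.op(data)

class TransferModule:
    def __init__(self, nxt=None):
        self.nxt = nxt        # next calculation module along the flow

    def forward(self, result):
        # receive a calculation result and pass it in the preset direction
        return self.nxt.calculate(result) if self.nxt else result

scale = CalcModule(lambda x: x * 2)
offset = CalcModule(lambda x: x + 3)
t1 = TransferModule(nxt=offset)   # transfer module after the first calc module
t2 = TransferModule()             # end of the data flow

out = t2.forward(t1.forward(scale.calculate(5)))
print(out)  # (5 * 2) + 3 = 13
```

Each calculation module computes one node's result and hands it to its transfer module, so data moves through the network in a single preset direction, as the abstract describes.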

Convolution engine for neural networks

A method and hardware system for mapping an input map of a convolutional neural network layer to an output map are disclosed. An array of processing elements is interconnected to support unidirectional dataflows through the array along at least three different spatial directions. Each processing element is adapted to combine values of dataflows along different spatial directions into a new value for at least one of the supported dataflows. For each data entry in the output map, a plurality of products from pairs of weights of a selected convolution kernel and selected data entries in the input map is provided and arranged into a plurality of associated partial sums. Products associated with a same partial sum are accumulated on the array into at least one data entry in the output map.
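The arithmetic being mapped onto the array can be shown in plain Python: each output entry is a partial sum of weight-by-input products. This sketch shows only the product/partial-sum structure, not the systolic dataflow wiring; the 1D "valid" convolution is an assumption chosen for brevity.

```python
def conv1d_valid(input_map, kernel):
    out_len = len(input_map) - len(kernel) + 1
    output_map = [0] * out_len
    for o in range(out_len):
        # products associated with the same output entry form one partial sum
        partial_sum = 0
        for k, weight in enumerate(kernel):
            # pair a kernel weight with a selected input-map entry
            partial_sum += weight * input_map[o + k]
        output_map[o] = partial_sum   # accumulate into the output map
    return output_map

print(conv1d_valid([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```

On the hardware described, the inner accumulation is distributed across processing elements, with each element folding a product into a partial sum flowing past it.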

Compiler operations for tensor streaming processor

Embodiments are directed to a processor having a functional slice architecture. The processor is divided into tiles (or functional units) organized into a plurality of functional slices. The functional slices are configured to perform specific operations within the processor, which includes memory slices for storing operand data and arithmetic logic slices for performing operations on received operand data (e.g., vector processing, matrix manipulation). The processor includes a plurality of functional slices of a module type, each functional slice having a plurality of tiles. The processor further includes a plurality of data transport lanes for transporting data in a direction indicated in a corresponding instruction. The processor also includes a plurality of instruction queues, each instruction queue associated with a corresponding functional slice of the plurality of functional slices, wherein the instructions in the instruction queues comprise a functional slice specific operation code.
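The per-slice instruction queues can be sketched as below. The two slice types (a memory slice supplying operand data and an ALU slice operating on it) and all names are illustrative assumptions about the functional-slice idea, not Groq's implementation.

```python
from collections import deque

class FunctionalSlice:
    def __init__(self, name, ops):
        self.name = name
        self.queue = deque()   # instruction queue for this slice
        self.ops = ops         # slice-specific operation codes

    def issue(self, opcode, *args):
        self.queue.append((opcode, args))

    def run(self):
        results = []
        while self.queue:
            opcode, args = self.queue.popleft()
            results.append(self.ops[opcode](*args))
        return results

memory = FunctionalSlice("MEM", {"read": lambda mem, i: mem[i]})
alu = FunctionalSlice("ALU", {"add": lambda a, b: a + b})

operands = [7, 8, 9]
memory.issue("read", operands, 1)
(a,) = memory.run()            # memory slice supplies the operand data
alu.issue("add", a, 10)
result = alu.run()
print(result)  # [18]
```

Each queue holds only opcodes its own slice understands, mirroring the abstract's "functional slice specific operation code," and the slices drain their queues independently.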