Patent classifications
G06F15/8092
Shared memory access for reconfigurable parallel processor using a plurality of memory ports each comprising an address calculation unit
Processors, systems and methods are provided for thread level parallel processing. A processor may comprise a plurality of processing elements (PEs) each having a plurality of arithmetic logic units (ALUs) that are configured to execute a same instruction in parallel threads and a plurality of memory ports (MPs) for the plurality of PEs to access a memory unit. Each of the plurality of MPs may comprise an address calculation unit configured to generate respective memory addresses for each thread to access a common area in the memory unit.
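As a rough illustration of the mechanism this abstract describes, here is a minimal behavioral sketch in Python, assuming a simple base-plus-offset scheme in which every parallel thread derives its own address into the same shared region. The names (`shared_addresses`, `NUM_THREADS`) and the 32-thread width are illustrative assumptions, not taken from the patent.

```python
# Hypothetical behavioral model of a memory port's address calculation
# unit: each parallel thread gets its own address into a common
# (shared) area of the memory unit.

NUM_THREADS = 32  # assumed number of parallel threads (ALUs) per PE

def shared_addresses(base, per_thread_offsets):
    """Generate one address per thread into a common memory area.

    base: start of the shared region.
    per_thread_offsets: one offset per thread, e.g. indices computed
    by each thread's ALU.
    """
    return [base + off for off in per_thread_offsets]

# Example: all 32 threads index into one shared lookup table.
addrs = shared_addresses(base=0x1000, per_thread_offsets=range(NUM_THREADS))
print(addrs[:4])  # [4096, 4097, 4098, 4099]
```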
Private memory access for reconfigurable parallel processor using a plurality of memory ports each comprising an address calculation unit
Processors, systems and methods are provided for thread level parallel processing. A processor may comprise a plurality of processing elements (PEs) and a plurality of memory ports (MPs) for the plurality of PEs to access a memory unit. Each PE may have a plurality of arithmetic logic units (ALUs) that are configured to execute a same instruction in parallel threads. Each of the plurality of MPs may comprise an address calculation unit configured to generate respective memory addresses for each thread to access a different memory bank in the memory unit.
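A minimal sketch of the private-access counterpart follows, assuming a one-bank-per-thread mapping so threads never contend for the same bank; the modulo mapping and the constants are illustrative assumptions, not the claimed design.

```python
# Hypothetical model of private-memory addressing: the memory port
# steers each thread to a different memory bank in the memory unit.

NUM_BANKS = 32        # assumed: one bank per parallel thread
BANK_SIZE = 1 << 12   # assumed bank capacity in words

def private_address(thread_id, local_offset):
    """Map (thread, offset) to a (bank, word) pair.

    Each thread_id selects its own bank; local_offset indexes the
    thread's private slice within that bank.
    """
    assert local_offset < BANK_SIZE
    bank = thread_id % NUM_BANKS
    return bank, local_offset

# Thread 5 reads word 7 of its private storage: bank 5, word 7.
print(private_address(5, 7))  # (5, 7)
```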
Circular reconfiguration for reconfigurable parallel processor using a plurality of memory ports coupled to a commonly accessible memory unit
Processors, systems and methods are provided for thread level parallel processing. A processor may comprise a plurality of reconfigurable units that may include a plurality of processing elements (PEs) and a plurality of memory ports (MPs) for the plurality of PEs to access a memory unit. Each of the plurality of reconfigurable units may comprise a configuration buffer and a reconfiguration counter. The processor may further comprise a sequencer coupled to the configuration buffer of each of the plurality of reconfigurable units and configured to distribute a plurality of configurations to the plurality of reconfigurable units for the plurality of PEs and the plurality of MPs to execute a sequence of instructions.
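The configuration-buffer/reconfiguration-counter interplay can be sketched behaviorally as below. The `(instruction, repeat count)` encoding is a hypothetical simplification of what a sequencer might distribute; the class and names are invented for illustration.

```python
# Hypothetical sketch: the sequencer fills each unit's configuration
# buffer; a unit repeats its current configuration until its
# reconfiguration counter expires, then loads the next one.

from collections import deque

class ReconfigurableUnit:
    def __init__(self, name):
        self.name = name
        self.config_buffer = deque()   # filled by the sequencer
        self.counter = 0               # reconfiguration counter

    def step(self):
        if self.counter == 0 and self.config_buffer:
            # Load the next configuration: (instruction, repeat count).
            self.current, self.counter = self.config_buffer.popleft()
        if self.counter > 0:
            print(f"{self.name}: executing {self.current}")
            self.counter -= 1

units = [ReconfigurableUnit("PE0"), ReconfigurableUnit("MP0")]
# The sequencer distributes the same configuration sequence to all units.
for u in units:
    u.config_buffer.extend([("mul", 2), ("add", 1)])
for _ in range(3):
    for u in units:
        u.step()
```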
Reconfigurable parallel processing with a temporary data storage coupled to a plurality of processing elements (PEs) to store a PE execution result to be used by a PE during a next PE configuration
Processors, systems and methods are provided for thread level parallel processing. A processor may comprise a plurality of processing elements (PEs) each comprising a configuration buffer; a sequencer coupled to the configuration buffer of each of the plurality of PEs and configured to distribute one or more PE configurations to the PEs; and a gasket memory coupled to the plurality of PEs and configured to store at least one PE execution result to be used by at least one of the PEs during a next PE configuration.
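A minimal sketch of the gasket memory's role, assuming it behaves as a FIFO bridging two successive configurations; the two "configurations" here are ordinary functions standing in for the reconfigured PE array.

```python
# Hypothetical sketch: results produced under one PE configuration are
# buffered in the gasket memory and consumed by PEs under the next
# configuration, so data survives reconfiguration.

from collections import deque

gasket = deque()  # FIFO between successive configurations

def run_config_a(inputs):
    # First configuration: compute partial results and park them
    # in the gasket memory instead of an external store.
    for x in inputs:
        gasket.append(x * x)

def run_config_b():
    # Next configuration: PEs consume the buffered results.
    out = []
    while gasket:
        out.append(gasket.popleft() + 1)
    return out

run_config_a([1, 2, 3])
print(run_config_b())  # [2, 5, 10]
```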
Apparatuses, methods, and systems for vector processor architecture having an array of identical circuit blocks
Systems, methods, and apparatuses relating to a vector processor architecture having an array of identical circuit blocks are described. In one embodiment, a processor includes a single centralized circuit comprising an instruction decoder and a controller, and a plurality of circuit slices that each comprise an arithmetic logic unit, a multiplier, a register file, a local memory, the same plurality of logic circuits, and a packed data datapath in between. Each circuit slice includes a physical port that provides a unique identification value distinguishing it from the other circuit slices. The controller is to broadcast the same configuration value to all circuit slices, causing a first circuit slice to enable both a first and a second logic circuit based on its unique identification value and the configuration value, and causing a second circuit slice to enable the same first logic circuit but disable the same second logic circuit based on its unique identification value and the configuration value.
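A behavioral sketch of this broadcast-decode scheme, assuming a two-bit configuration word and an arbitrary ID-dependent gating rule; the encoding is invented for illustration and is not the patented one.

```python
# Hypothetical model: one controller broadcasts the same configuration
# word to every slice; each slice combines it with its hard-wired ID
# to decide which of its (identical) logic circuits to enable.

NUM_SLICES = 8

def decode(slice_id, config_value):
    """Return the set of enabled logic circuits for one slice.

    Assumed encoding: bit k of config_value enables circuit k, but
    circuit 1 is additionally gated so only even-numbered slices
    enable it -- an arbitrary example of ID-dependent decode.
    """
    enabled = set()
    if config_value & 0b01:
        enabled.add("circuit0")                 # enabled everywhere
    if config_value & 0b10 and slice_id % 2 == 0:
        enabled.add("circuit1")                 # gated by slice ID
    return enabled

config = 0b11  # broadcast unchanged to all slices
for sid in range(4):
    print(sid, sorted(decode(sid, config)))
```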
Data processing
A data processing apparatus comprises processing circuitry to apply processing operations to one or more data items of a linear array comprising a plurality, n, of data items at respective positions, where n is an integer greater than one; the processing circuitry is configured to access an array of n×n storage locations. The processing circuitry comprises instruction decoder circuitry to decode program instructions and instruction processing circuitry to execute the decoded instructions. In response to an array access instruction, the instruction decoder circuitry controls the instruction processing circuitry to access, as a linear array, a set of n storage locations arranged in an array direction selected, under control of the array access instruction, from a set of candidate array directions comprising at least a first array direction and a second array direction different from the first.
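A sketch of the selectable array direction, assuming the two candidate directions are the rows and columns of the n×n storage array; the direction names are illustrative.

```python
# Hypothetical sketch: an n-by-n block of storage locations can be
# read as a linear array along either of two directions, selected by
# the array access instruction.

n = 4
storage = [[r * n + c for c in range(n)] for r in range(n)]  # n x n locations

def array_access(index, direction):
    """Read n locations as a linear array along the chosen direction."""
    if direction == "row":       # first array direction
        return storage[index]
    elif direction == "column":  # second array direction
        return [storage[r][index] for r in range(n)]
    raise ValueError(direction)

print(array_access(1, "row"))     # [4, 5, 6, 7]
print(array_access(1, "column"))  # [1, 5, 9, 13]
```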
Shared scratchpad memory with parallel load-store
Methods, systems, and apparatus, including computer-readable media, are described for a hardware circuit configured to implement a neural network. The circuit includes a first memory, first and second processor cores, and a shared memory. The first memory provides data for performing computations to generate an output for a neural network layer. Each of the first and second cores includes a vector memory for storing vector values derived from the data provided by the first memory. The shared memory is disposed generally between the first memory and at least one core and includes: i) a direct memory access (DMA) data path configured to route data between the shared memory and the respective vector memories of the first and second cores, and ii) a load-store data path configured to route data between the shared memory and respective vector registers of the first and second cores.
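A behavioral sketch of the two data paths, assuming the DMA path moves contiguous blocks into a core's vector memory while the load-store path moves single words into its vector registers; the class and method names are illustrative assumptions.

```python
# Hypothetical model of a shared scratchpad with two access paths:
# a bulk DMA path into a core's vector memory, and a word-granular
# load-store path into a core's vector registers.

class SharedScratchpad:
    def __init__(self, size):
        self.mem = [0] * size

    def dma_to_vector_memory(self, core, src, dst, length):
        """DMA path: bulk transfer, shared memory -> vector memory."""
        core.vector_memory[dst:dst + length] = self.mem[src:src + length]

    def load(self, addr):
        """Load-store path: one word toward a vector register."""
        return self.mem[addr]

class Core:
    def __init__(self):
        self.vector_memory = [0] * 64
        self.vreg = [0] * 8

shared = SharedScratchpad(128)
shared.mem[0:4] = [10, 20, 30, 40]
core0 = Core()
shared.dma_to_vector_memory(core0, src=0, dst=0, length=4)  # DMA path
core0.vreg[0] = shared.load(2)                              # load-store path
print(core0.vector_memory[:4], core0.vreg[0])  # [10, 20, 30, 40] 30
```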
Vector processing unit
A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit so that data can be exchanged at high bandwidth, based on the placement of the processor units relative to one another and the placement of the vector memory relative to each processor unit.
Parallel computational architecture with reconfigurable core-level and vector-level parallelism
Neural network processing hardware using parallel computational architectures with reconfigurable core-level and vector-level parallelism is provided. In various embodiments, a neural network model memory is adapted to store a neural network model comprising a plurality of layers. Each layer has at least one dimension and comprises a plurality of synaptic weights. A plurality of neural cores is provided. Each neural core includes a computation unit and an activation memory. The computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The computation unit has a plurality of vector units. The activation memory is adapted to store the input activations and the output activations. The system is adapted to partition the plurality of cores into a plurality of partitions based on dimensions of the layer and the vector units.
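A sketch of dimension-driven partitioning, assuming a simple tiling heuristic that divides cores evenly across layer tiles; the actual partitioning rule in the patent may differ, and all names here are illustrative.

```python
# Hypothetical sketch: given a layer's spatial size and the per-core
# vector width, group cores into partitions so each partition covers
# one tile of the layer.

import math

def partition_cores(num_cores, layer_height, layer_width, vector_width):
    """Split cores into partitions sized to the layer's dimensions."""
    tiles = math.ceil((layer_height * layer_width) / vector_width)
    cores_per_partition = max(1, num_cores // tiles)
    return [list(range(i, min(i + cores_per_partition, num_cores)))
            for i in range(0, num_cores, cores_per_partition)]

# 16 cores, a 4x4 layer, vector units 8 wide -> 2 tiles, 8 cores each.
print(partition_cores(16, 4, 4, 8))  # [[0..7], [8..15]]
```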
Reconfigurable parallel processing
Processors, systems and methods are provided for thread level parallel processing. A processor may comprise a plurality of processing elements (PEs) each comprising a configuration buffer; a sequencer coupled to the configuration buffer of each of the plurality of PEs and configured to distribute one or more PE configurations to the PEs; and a gasket memory coupled to the plurality of PEs and configured to store at least one PE execution result to be used by at least one of the PEs during a next PE configuration.