Patent classifications
G06F2207/3892
Memory-Size- and Bandwidth-Efficient Method for Feeding Systolic Array Matrix Multipliers
Matrix multiplication systolic array feed methods and related processing element (PE) microarchitectures for efficiently implementing a systolic array generic matrix multiplier (SGEMM) in integrated circuits are provided. A systolic array architecture may include a processing element array, a column feeder array, and a row feeder array. The required external memory bandwidth may be reduced by a reduction factor based on interleaving of the matrix data via the feeding pattern of the column feeder array and the row feeder array.
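The feeder arrangement described above can be illustrated with a small behavioral model: row feeders stream matrix data from the left and column feeders from the top, skewed by one cycle per row/column, so each external-memory element is read once by a feeder and reused across the array. This is an illustrative sketch only; the function and variable names are not from the patent.

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    Row feeders stream rows of A from the left; column feeders stream
    columns of B from the top, skewed by one cycle per row/column so
    each PE sees matching operands. Each memory element is fetched once
    and reused across the array, modeling the bandwidth reduction the
    abstract describes.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N))
    # Total cycles: K reduction steps plus (M-1)+(N-1) cycles of skew.
    for t in range(K + M + N - 2):
        for i in range(M):
            for j in range(N):
                k = t - i - j          # skewed arrival time at PE (i, j)
                if 0 <= k < K:
                    C[i, j] += A[i, k] * B[k, j]
    return C
```

The skew (`k = t - i - j`) is the classic wavefront feeding pattern; each PE accumulates its own output element in place.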
Dilated convolution using systolic array
In one example, a non-transitory computer-readable medium stores instructions that, when executed by one or more hardware processors, cause the one or more hardware processors to: load a first weight data element of an array of weight data elements from a memory into a systolic array; select, from the memory, a subset of input data elements for the systolic array to perform first computations of a dilated convolution operation, the subset being selected based on a rate of the dilated convolution operation and the coordinates of the first weight data element within the array of weight data elements; and control the systolic array to perform the first computations based on the first weight data element and the subset to generate first output data elements of an output data array. An example of a compiler that generates the instructions is also provided.
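The per-weight flow in this abstract can be mirrored in a short 1-D sketch: for each weight element, a subset of the input is selected based on the dilation rate and the weight's coordinate, then multiplied in and accumulated into the output. This is a hedged illustration; the names are invented here, not taken from the patent.

```python
import numpy as np

def dilated_conv1d_per_weight(x, w, rate):
    """1-D dilated convolution computed one weight element at a time.

    For each weight w[k], the contributing input subset starts at
    offset k * rate (selection by rate and weight coordinate); its
    products are accumulated into the output array as partial sums.
    """
    K = len(w)
    out_len = len(x) - rate * (K - 1)
    y = np.zeros(out_len)
    for k in range(K):                              # load one weight element
        subset = x[k * rate : k * rate + out_len]   # subset selected by rate
        y += w[k] * subset                          # accumulate partial sums
    return y
```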
Systolic array component combining multiple integer and floating-point data types
Systems and methods are provided to perform multiply-accumulate operations of multiple data types in a systolic array. One or more processing elements in the systolic array can include a shared multiplier and one or more adders. The shared multiplier can include separate and/or shared circuitry, where the shared circuitry can perform at least a part of integer multiplication and at least a part of non-integer multiplication. The one or more adders can include one or more shared adders or one or more separate adders. A shared adder can include separate and/or shared circuitry, where the shared circuitry can perform at least a part of integer addition and at least a part of non-integer addition.
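One plausible reading of the shared/separate split is that the shared circuitry is an integer significand multiplier that the floating-point path reuses on mantissas, with exponent handling done by separate logic. The following is an illustrative model under that assumption, not the patented circuit; the field width `P` is an assumed 24-bit significand.

```python
import math

def shared_multiply(a, b, dtype):
    """Model of one multiplier handling both integer and float inputs.

    The shared circuitry is int_core (an integer multiplier). Float
    inputs are split into mantissa and exponent; the mantissas reuse
    int_core while separate logic sums the exponents.
    """
    def int_core(x, y):            # the shared integer multiplier
        return x * y

    if dtype == "int":
        return int_core(a, b)
    # Float path: decompose, reuse the shared integer core on mantissas.
    ma, ea = math.frexp(a)         # a = ma * 2**ea, 0.5 <= |ma| < 1
    mb, eb = math.frexp(b)
    P = 24                         # assumed significand width
    ia, ib = round(ma * (1 << P)), round(mb * (1 << P))
    prod = int_core(ia, ib)        # shared integer multiplication
    return math.ldexp(prod, ea + eb - 2 * P)  # separate exponent logic
```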
CALCULATION CIRCUIT AND DEEP LEARNING SYSTEM INCLUDING THE SAME
A calculation circuit may include a plurality of calculator groups constituting a systolic array composed of a plurality of rows and columns. The calculator groups in each row propagate a data value set through a single data path corresponding to that row in a data propagation direction, and propagate a plurality of drain value sets through a plurality of drain paths corresponding to that row in a drain propagation direction. Each calculator group in a row comprises a plurality of multiplier-accumulator (MAC) circuits, and those MAC circuits generate, at the same time, the drain values respectively included in the drain value sets. The calculator groups in each column may further propagate a weight value set corresponding to that column through a plurality of weight data paths corresponding to the column.
SYSTOLIC ARRAY INCLUDING FUSED MULTIPLY ACCUMULATE WITH EFFICIENT PRENORMALIZATION AND EXTENDED DYNAMIC RANGE
Systems and methods are provided to perform multiply-accumulate operations on normalized numbers in a systolic array, enabling greater computational density, reducing the size of the systolic arrays required for such operations, and/or enabling higher-throughput operation. The systolic array can be provided with normalized numbers by a column of normalizers and can therefore lack support for denormal numbers. Each normalizer can normalize the inputs to a processing element in the systolic array. The systolic array can include a multiplier and an adder. The multiplier can have multiple data paths corresponding to the data type of the input. The multiplier and adder can employ an expanded exponent range to operate on normalized floating-point numbers while lacking support for denormal numbers.
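The prenormalization idea can be sketched as a function that maps any input, including a denormal, to a sign, a mantissa in [1, 2), and an exponent that may fall outside the normal float32 range (the extended dynamic range). This is a sketch under assumed field semantics, not the patent's normalizer circuit.

```python
import math

def normalize(x):
    """Normalize a value into (sign, mantissa in [1, 2), extended exponent).

    Models one normalizer from the column described in the abstract:
    a denormal input is renormalized, and its exponent is allowed to
    drop below the normal minimum, so downstream PEs never see
    denormal encodings.
    """
    if x == 0.0:
        return (0, 0.0, 0)
    sign = 0 if x > 0 else 1
    m, e = math.frexp(abs(x))      # abs(x) = m * 2**e, m in [0.5, 1)
    return (sign, m * 2.0, e - 1)  # mantissa in [1, 2); exponent may
                                   # exceed the normal float32 range
```

For example, a float32 denormal near 2**-130 normalizes to mantissa 1.0 with exponent -130, below the normal float32 minimum of -126.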
SYSTOLIC ARRAY COMPONENT COMBINING MULTIPLE INTEGER AND FLOATING-POINT DATA TYPES
Systems and methods are provided to perform multiply-accumulate operations of multiple data types in a systolic array, to increase clock speeds and/or reduce the size and number of systolic arrays required for such operations. Each processing element in the systolic array can have a shared multiplier and one or more adders. The shared multiplier can have separate and/or shared circuitry, where the shared circuitry is capable of performing at least a part of integer multiplication and at least a part of non-integer multiplication. The one or more adders can be a shared adder or separate adders. The shared adder can have separate and shared circuitry, wherein the shared circuitry is capable of performing at least a part of integer addition and at least a part of non-integer addition.
DATA FORMAT TRANSFORM METHOD TO IMPROVE AI ENGINE MAC UTILIZATION
A data format converter rearranges the data of an input image for input to a systolic array of multiply-and-accumulate (MAC) processing elements (PEs). The image has a pixel height and a pixel width, with a number of channels equal to the number of colors per pixel. The data format converter rearranges the data into a second, greater number of channels and inputs the second number of channels to one side of the systolic array. The second number of channels is less than or equal to the number of MAC PEs on that side of the systolic array, resulting in greater MAC PE utilization in the systolic array.
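One common transform of this kind is space-to-depth: a 3-channel image under-utilizes a wide systolic array edge, so each spatial block of pixels is folded into extra channels. The sketch below illustrates that reading; the patent's exact rearrangement may differ.

```python
import numpy as np

def space_to_depth(img, block=2):
    """Rearrange an H x W x C image into (H/b) x (W/b) x (C*b*b).

    Folding each b x b spatial block into channels raises the channel
    count from C (e.g. 3 colors) to C*b*b, feeding more MAC PE rows on
    one side of the systolic array per cycle.
    """
    H, W, C = img.shape
    assert H % block == 0 and W % block == 0
    out = img.reshape(H // block, block, W // block, block, C)
    out = out.transpose(0, 2, 1, 3, 4)    # gather each b x b block together
    return out.reshape(H // block, W // block, C * block * block)
```

A 4x4 RGB image (3 channels) becomes a 2x2 image with 12 channels, so a 12-row array edge is fully occupied instead of only 3 rows.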
COMPRESSION-ENCODING SCHEDULED INPUTS FOR MATRIX COMPUTATIONS
A method of performing matrix computations includes receiving a compression-encoded matrix including a plurality of rows. Each row of the compression-encoded matrix has a plurality of defined element values and, for each such defined element value, a schedule tag indicating a schedule for using the defined element value in a scheduled matrix computation. The method further includes loading the plurality of rows of the compression-encoded matrix into a corresponding plurality of work memory banks, and providing decoded input data to a matrix computation module configured for performing the scheduled matrix computation. For each work memory bank, a next defined element value and a corresponding schedule tag are read. If the schedule tag meets a scheduling condition, the next defined element value is provided to the matrix computation module. Otherwise, a default element value is provided to the matrix computation module.
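The per-bank decode loop described above can be sketched directly: at each schedule step, the next defined element is emitted if its tag satisfies the scheduling condition, and a default value otherwise. The tag encoding here (tag equals the step index) is an assumption for illustration; the abstract does not specify the condition.

```python
def decode_bank(bank, num_steps, default=0):
    """Decode one work-memory bank of (value, schedule_tag) pairs.

    Emits one value per schedule step: the next defined element when
    its tag matches the current step, otherwise the default element
    value (e.g. zero for a sparse matrix).
    """
    out = []
    idx = 0
    for step in range(num_steps):
        if idx < len(bank) and bank[idx][1] == step:
            out.append(bank[idx][0])   # defined element scheduled now
            idx += 1
        else:
            out.append(default)        # filler for skipped positions
    return out
```

This recovers a dense input stream for the matrix computation module while storing only the defined (e.g. nonzero) elements and their tags.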