Patent classifications
G06F7/5312
Fast vector multiplication and accumulation circuit
A fast vector multiplication and accumulation circuit is applied to an artificial neural network accelerator and configured to calculate an inner product of a multiplier vector and a multiplicand vector. A scheduler is configured to arrange a plurality of multiplicands of the multiplicand vector into a plurality of scheduled operands according to a plurality of multipliers of the multiplier vector, respectively. A self-accumulating adder is signally connected to the scheduler and includes a compressor, at least two delay elements and at least one shifter. The compressor is configured to add the scheduled operands to generate a plurality of compressed operands. The at least two delay elements are connected to the compressor. The shifter is configured to shift one of the compressed operands. An adder is signally connected to the output ports of the compressor so as to add the compressed operands to generate the inner product.
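For illustration, the data flow described above (partial products held in redundant carry-save form, with a single carry-propagate addition at the end) can be sketched in software. The following minimal Python sketch is an analogue under stated assumptions, not the patented circuit: csa_3_2 stands in for the compressor, the shifted partial products play the role of the scheduled operands, and the final + plays the role of the adder on the compressor's output ports. All names are hypothetical and only unsigned operands are handled.

    def csa_3_2(a, b, c):
        # 3:2 compressor: a + b + c == s + carry, with no carry
        # propagation inside the compressor itself.
        s = a ^ b ^ c
        carry = ((a & b) | (a & c) | (b & c)) << 1
        return s, carry

    def inner_product(multipliers, multiplicands):
        # Redundant-form accumulation: every scheduled operand (a shifted
        # multiplicand, one per set multiplier bit) is folded into a single
        # (sum, carry) pair; one carry-propagate add resolves it at the end.
        s, c = 0, 0
        for m, x in zip(multipliers, multiplicands):
            bit = 0
            while m:
                if m & 1:
                    s, c = csa_3_2(s, c, x << bit)
                m >>= 1
                bit += 1
        return s + c

    assert inner_product([3, 5], [7, 9]) == 3 * 7 + 5 * 9  # 66

Deferring the carry-propagate addition until the very end is what makes such accumulators fast: each compression step is constant-depth regardless of operand width.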
Digital circuit with compressed carry
Embodiments of the present disclosure pertain to digital circuits with compressed carries. In one embodiment, an adder circuit generates a sum and carry. The carry is compressed to reduce the number of bits required to represent the carry. In one embodiment, a multiplier circuit generates output product values. The output product values may be summed to produce a sum and carry. The carry may be compressed. In other embodiments, a multiplier circuit receives an input sum and compressed carry. The compressed input carry is decompressed and added to output product values and the input sum, and a resulting carry is compressed. The output of such a multiplier is another sum and compressed carry.
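The abstract does not spell out the compression scheme, so the following Python sketch is a toy illustration only: in carry-save arithmetic, a carry word produced by a 3:2 compressor is always left-shifted, so its least significant bit is guaranteed zero and need not be stored. The sketch shows one multiply-accumulate stage that decompresses an incoming carry, folds in a new product, and re-compresses the resulting carry; all names are hypothetical.

    def csa_3_2(a, b, c):
        # Carry-save reduction: a + b + c == sum + carry.
        return a ^ b ^ c, ((a & b) | (a & c) | (b & c)) << 1

    def compress_carry(carry):
        # Toy compression: the carry word's LSB is guaranteed zero
        # (it was produced by a left shift), so drop it.
        assert carry & 1 == 0
        return carry >> 1

    def decompress_carry(compressed):
        return compressed << 1

    def mac_stage(in_sum, in_cc, a, b):
        # One stage of a chain: decompress, add the product, re-compress.
        s, c = csa_3_2(in_sum, decompress_carry(in_cc), a * b)
        return s, compress_carry(c)

    s, cc = mac_stage(0, 0, 6, 7)    # stage 1: 42
    s, cc = mac_stage(s, cc, 5, 5)   # stage 2: + 25
    assert s + decompress_carry(cc) == 67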
MULTIPLIER CIRCUIT
A multiplier circuit is described in which sub-products calculated in a first stage of a carry-save adder (CSA) network are output early, processed by applying a processing function, and re-injected into a subsequent stage of the CSA network to add the processed sub-products. This allows a CSA network provided for multiplication operations to be reused for operations which require sub-products to be processed and added, such as floating-point dot product operations performed on floating-point values represented in bfloat16 format.
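A hedged software analogue of the re-injection idea: the Python function below computes a multiply through an explicit first stage of partial products (the sub-products), lets a caller-supplied process hook transform them, and then finishes the reduction with carry-save steps. With process as the identity it is a plain multiply; a dot-product use would substitute an alignment shift. This is a sketch of the concept, not the patented CSA network.

    def csa(a, b, c):
        return a ^ b ^ c, ((a & b) | (a & c) | (b & c)) << 1

    def multiply_with_reinjection(x, y, process=lambda t: t):
        # First CSA stage: raw sub-products, one per set bit of y.
        subproducts = [x << i for i in range(y.bit_length()) if (y >> i) & 1]
        # Early output, external processing, re-injection.
        terms = [process(t) for t in subproducts]
        s, c = 0, 0
        for t in terms:
            s, c = csa(s, c, t)
        return s + c

    assert multiply_with_reinjection(13, 11) == 143
    # Re-injecting processed sub-products, e.g. an alignment shift of 2:
    assert multiply_with_reinjection(13, 11, lambda t: t << 2) == 143 * 4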
Apparatus and Method of Fast Floating-Point Adder Tree for Neural Networks
A computing device implementing a fast floating-point adder tree for neural network applications is disclosed. The fast floating-point adder tree comprises a data preparation module, a fast fixed-point Carry-Save Adder (CSA) tree, and a normalization module. Each floating-point input comprises a sign bit, an exponent part, and a fraction part. The data preparation module aligns the fraction parts of the input data and prepares the input data for subsequent processing. The fast adder uses a signed fixed-point CSA tree to quickly reduce a large number of fixed-point operands to two output values and then uses a normal adder to add the two output values into one output value. The fast adder for a large number of operands is built from multiple levels of fast adders for a small number of operands. The output from the signed fixed-point CSA tree is converted to a selected floating-point format.
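The described flow (align fractions to a common exponent, reduce in a signed fixed-point CSA tree to two values, add once, normalize) can be sketched in Python. The field widths below are assumptions in the spirit of bfloat16 (7 fraction bits, implicit leading 1); rounding, subnormals, and overflow are ignored, and a real format would reserve a distinct encoding for zero.

    def fp_tree_sum(values, frac_bits=7):
        # values: list of (sign, exponent, fraction) with an implicit leading 1.
        max_exp = max(e for _, e, _ in values)
        # Data preparation: signed fixed-point mantissas aligned to max_exp.
        fixed = []
        for sign, exp, frac in values:
            mant = ((1 << frac_bits) | frac) >> (max_exp - exp)
            fixed.append(-mant if sign else mant)
        # Stand-in for the CSA tree: reduce all operands to two values...
        s, c = 0, 0
        for v in fixed:
            s, c = v ^ s ^ c, ((v & s) | (v & c) | (s & c)) << 1
        total = s + c                  # ...then one normal add.
        # Normalization back to (sign, exponent, fraction).
        if total == 0:
            return (0, 0, 0)
        sign, mag = int(total < 0), abs(total)
        shift = mag.bit_length() - 1 - frac_bits
        frac = mag >> shift if shift >= 0 else mag << -shift
        return (sign, max_exp + shift, frac & ((1 << frac_bits) - 1))

    # 1.5 (= 1.1b * 2^0) + 2.5 (= 1.01b * 2^1) == 4.0 (= 1.0b * 2^2)
    assert fp_tree_sum([(0, 0, 64), (0, 1, 32)]) == (0, 2, 0)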
Programmable-Logic-Directed Multiplier Mapping
The present disclosure relates generally to techniques for enhancing multipliers implemented on an integrated circuit. In particular, by refactoring arithmetic implemented by a multiplier to perform multiplication, routing (e.g., wiring) used by the multiplier may be improved. As a result, the integrated circuit may benefit from increased efficiencies, reduced latency, and reduced resource consumption (e.g., wiring, area, and power) involved with implementing multiplication, which may improve machine learning implementations on the integrated circuit.
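One well-known refactoring in this spirit (offered purely as an illustration; the disclosure's specific mapping is not reproduced here) is decomposing a wide multiply into narrower ones plus shifts, which lets a tool trade one large multiplier for several small ones with shorter, simpler routing.

    def mul16_via_8x8(a, b):
        # Schoolbook decomposition: one 16x16 multiply refactored into
        # four 8x8 partial multiplies plus shifts and adds.
        a_lo, a_hi = a & 0xFF, a >> 8
        b_lo, b_hi = b & 0xFF, b >> 8
        return (a_lo * b_lo
                + ((a_lo * b_hi + a_hi * b_lo) << 8)
                + ((a_hi * b_hi) << 16))

    assert mul16_via_8x8(0xBEEF, 0xCAFE) == 0xBEEF * 0xCAFE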
METHOD AND APPARATUS FOR PERFORMING FIELD PROGRAMMABLE GATE ARRAY PACKING WITH CONTINUOUS CARRY CHAINS
A method for designing a system on a target device includes identifying a length for a carry chain that is supported by predefined quanta of a resource on the target device. A plurality of logical adders is mapped onto a single logical adder implemented on the carry chain subject to the identified length to increase logic utilization in a design for the system.
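A standard trick in this area (an illustration of adder packing in general, not the patented method) maps two independent logical additions onto one physical carry chain, with a zero guard bit between the fields so the carry out of the low add cannot spill into the high add. Field widths are assumptions.

    def packed_add(a1, b1, a2, b2, width=8):
        # Two logical adders on one carry chain: concatenate the operand
        # pairs with a zero guard bit at position `width`; the low sum's
        # carry-out lands in the guard bit instead of corrupting the
        # high field.
        gap = width + 1
        s = (a1 | (a2 << gap)) + (b1 | (b2 << gap))
        low = s & ((1 << (width + 1)) - 1)   # includes its carry-out bit
        high = s >> gap                      # second sum, with carry-out
        return low, high

    assert packed_add(200, 100, 30, 40) == (300, 70)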
METHOD AND APPARATUS FOR PERFORMING SYNTHESIS FOR FIELD PROGRAMMABLE GATE ARRAY EMBEDDED FEATURE PLACEMENT
A method for designing and configuring a system on a field programmable gate array (FPGA) is disclosed. A portion of the system that is implemented more than a predetermined number of times is identified. A structural netlist that describes how to implement the portion of the system a plurality of times on the FPGA, and that leverages the repetitive nature of implementing the portion, is generated. The identifying and generating are performed prior to synthesizing and placing other portions of the system that are not implemented more than the predetermined number of times. Synthesizing, placing, and routing the other portions of the system on the FPGA is performed in accordance with the structural netlist. The FPGA is configured with a configuration file that includes a design for the system that reflects the synthesizing, placing, and routing, wherein the configuring physically transforms resources on the FPGA to implement the system.