Patent classifications
G06F7/4836
FLOATING POINT CHAINED MULTIPLY ACCUMULATE
Floating point chained multiply accumulation is performed using a multiplier to multiply a first floating point operand by a second floating point operand to generate an unrounded multiplication result. An adder then adds a third floating point operand to the unrounded multiplication result to generate an unrounded accumulation result. Rounding circuitry then applies both the rounding associated with the unrounded multiplication result and the rounding associated with the unrounded accumulation result to generate a rounded accumulation result.
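To make the rounding order concrete, the following is a minimal Python sketch that uses exact rationals (fractions.Fraction) as a stand-in for the hardware's unrounded intermediates. It contrasts the chained reference behaviour, in which the product is rounded before the addition, with a fully fused single rounding. The circuit described above keeps the product unrounded internally but must still reproduce the chained, twice-rounded result; the function names and example operands here are illustrative only.

```python
from fractions import Fraction

def round_to_double(x: Fraction) -> float:
    """Round an exact rational to the nearest IEEE double (ties to even)."""
    return float(x)

def chained_reference(a: float, b: float, c: float) -> float:
    """Chained semantics: the product is rounded, then the sum is rounded.
    This is the result the circuit above must reproduce, even though it keeps
    the product unrounded internally and applies both roundings at the end."""
    product = round_to_double(Fraction(a) * Fraction(b))        # first rounding
    return round_to_double(Fraction(product) + Fraction(c))     # second rounding

def fused_single_rounding(a: float, b: float, c: float) -> float:
    """Fully fused semantics for comparison: one rounding of the exact a*b + c."""
    return round_to_double(Fraction(a) * Fraction(b) + Fraction(c))

if __name__ == "__main__":
    a = b = 1.0 + 2**-30
    c = -(1.0 + 2**-29)
    print(chained_reference(a, b, c))      # 0.0: the product's low bits are lost to the first rounding
    print(fused_single_rounding(a, b, c))  # 2**-60: a single final rounding keeps them
```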
HARDWARE EFFICIENT ROUNDING
A binary logic circuit and method for rounding an unsigned normalised n-bit binary number to an m-bit binary number. A correction value of length n bits and a pre-truncation value of length n bits are determined. The correction value is determined by shifting the n-bit number by m bits. The pre-truncation value is determined based on at least the n-bit number, the correction value, a value for the most significant bit (MSB) of the n-bit number, and a rounding value having a 1 at the (n−m)th bit position and a 0 at all other bit positions. The rounded m-bit number is then obtained by truncating the n−m least significant bits (LSBs) of the pre-truncation value.
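As a behavioural reference only, a plain round-to-nearest reduction of an unsigned n-bit value to m bits can be sketched as below. This does not reproduce the correction-value datapath of the abstract (whose exact bit indexing and tie handling are not given here); the half-up rounding and the saturation on overflow are assumptions made for the sketch.

```python
def round_to_m_bits(x: int, n: int, m: int) -> int:
    """Behavioural reference: round an unsigned n-bit value x to m bits,
    round-to-nearest with ties rounded up.  Assumes 0 < m < n and 0 <= x < 2**n.
    Not the correction-value circuit described in the abstract."""
    assert 0 < m < n and 0 <= x < (1 << n)
    half = 1 << (n - m - 1)                # half the weight of the discarded field
    rounded = (x + half) >> (n - m)        # add, then truncate the n-m LSBs
    return min(rounded, (1 << m) - 1)      # saturate if the addition overflows m bits (assumption)

if __name__ == "__main__":
    # 0b1011: the discarded low nibble 0b0110 is below half, so the value rounds down.
    print(bin(round_to_m_bits(0b10110110, n=8, m=4)))
```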
Hierarchical mantissa bit length selection for hardware implementation of deep neural network
Hierarchical methods for selecting fixed point number formats with reduced mantissa bit lengths for representing values input to, and/or output from, the layers of a DNN. The methods begin with one or more initial fixed point number formats for each layer. The layers are divided into subsets and the mantissa bit lengths of the fixed point number formats are iteratively reduced from the initial fixed point number formats on a per-subset basis. If a reduction causes the output error of the DNN to exceed an error threshold, the reduction is discarded and no more reductions are made to the layers of that subset; otherwise, a further reduction is made to the fixed point number formats for the layers in that subset. Once no further reductions can be made to any of the subsets, the method is repeated for successively increasing numbers of subsets until a predetermined number of layers per subset is reached.
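A hedged sketch of that per-subset reduction loop is given below. The output_error callable stands in for instantiating the DNN with the candidate fixed point formats and measuring its output error; the contiguous partitioning, the doubling of the subset count, and the minimum bit width are illustrative assumptions rather than details taken from the abstract.

```python
from typing import Callable, Dict, List

def select_mantissa_bits(
    layers: List[str],
    initial_bits: Dict[str, int],
    output_error: Callable[[Dict[str, int]], float],
    error_threshold: float,
    min_layers_per_subset: int = 1,
    min_bits: int = 2,
) -> Dict[str, int]:
    """Illustrative sketch of the hierarchical mantissa-bit-length reduction.
    All parameter names and the subset-doubling schedule are assumptions."""
    bits = dict(initial_bits)
    num_subsets = 1
    while True:
        # Partition the layers into roughly equal, contiguous subsets (assumption).
        size = max(1, len(layers) // num_subsets)
        subsets = [layers[i:i + size] for i in range(0, len(layers), size)]
        for subset in subsets:
            # Keep reducing this subset's mantissa bit lengths until the DNN's
            # output error exceeds the threshold, then discard that last reduction.
            while all(bits[layer] > min_bits for layer in subset):
                trial = dict(bits)
                for layer in subset:
                    trial[layer] -= 1
                if output_error(trial) > error_threshold:
                    break          # discard the reduction; stop for this subset
                bits = trial       # accept the reduction and try a further one
        if size <= min_layers_per_subset:
            return bits            # predetermined layers-per-subset reached
        num_subsets *= 2           # repeat with more (smaller) subsets
```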
METHODS AND SYSTEMS FOR ADDITION, MULTIPLICATION, SUBTRACTION, AND DIVISION OF RATIONAL NUMBERS ENCODED IN THE DOMAIN OF FAREY RATIONALS FOR MPC SYSTEMS
Disclosed are methods and systems for encoding and decoding using p-adic arithmetic and inverse p-adic arithmetic in the domain of Farey rationals induced by a ring isomorphism, such that the encoded integers preserve inverses as well as additive and multiplicative homomorphic properties. This encoding permits MPC systems to perform arithmetic more efficiently and accurately, particularly for division.
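The standard way to realise such an encoding is to map a Farey rational a/b to a·b⁻¹ mod p and to decode with extended-Euclidean rational reconstruction, sketched below for illustration. The modulus, bounds, and function names are assumptions of the sketch rather than details drawn from the disclosure, but the homomorphic behaviour on sums and products is visible in the usage example.

```python
from math import gcd, isqrt

def encode(num: int, den: int, p: int) -> int:
    """Map the Farey rational num/den to an element of Z_p as num * den^{-1} mod p.
    Sums and products of encodings then match sums and products of the rationals,
    which is what lets an MPC system divide by multiplying with an encoded inverse."""
    return (num * pow(den, -1, p)) % p

def decode(x: int, p: int) -> tuple[int, int]:
    """Recover a Farey rational n/d with |n|, d <= floor(sqrt((p-1)/2)) from its
    encoding x, via extended-Euclidean rational reconstruction (a standard
    textbook decoder, shown for illustration only)."""
    bound = isqrt((p - 1) // 2)
    r0, r1 = p, x % p
    t0, t1 = 0, 1
    while r1 > bound:                       # stop at the first small remainder
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    n, d = (r1, t1) if t1 > 0 else (-r1, -t1)
    g = gcd(abs(n), d)
    if d == 0 or d // g > bound:
        raise ValueError("not the encoding of an in-range Farey rational")
    return n // g, d // g

if __name__ == "__main__":
    p = 2**31 - 1                           # a prime modulus (illustrative)
    a = encode(3, 7, p)
    b = encode(2, 5, p)
    print(decode((a * b) % p, p))           # (6, 35): product of 3/7 and 2/5
    print(decode((a + b) % p, p))           # (29, 35): sum of 3/7 and 2/5
```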
Rounding hexadecimal floating point numbers using binary incrementors
Rounding hexadecimal floating point numbers using binary incrementors, including: incrementing, by a first incrementor, a first subset of bits of an operand comprising a binary hexadecimal floating point operand; incrementing, by a second incrementor, a second subset of bits of the operand; generating an intermediate result based on a carry-out of the second incrementor; and generating an incremented result based on a carry-out of the first incrementor and one or more of: a first bit of the intermediate result or the carry-out of the second incrementor.
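One common way to realise such split incrementing is a carry-select style arrangement, sketched below in Python: the low-order subset of bits is incremented, the high-order subset is incremented speculatively, and the low incrementor's carry-out selects between the original and incremented high bits. This illustrates the general split-incrementor idea only; the exact selection logic involving the intermediate result is not reproduced, and the operand width and split point are illustrative.

```python
def split_increment(operand: int, width: int, split: int) -> int:
    """Carry-select style sketch: increment a `width`-bit operand using two
    smaller incrementors, one for the low `split` bits and one (speculative)
    for the high bits, combined through the low incrementor's carry-out."""
    low_mask = (1 << split) - 1
    low, high = operand & low_mask, operand >> split

    # First incrementor: low bits plus one.
    low_inc = low + 1
    low_carry = low_inc >> split                     # carry-out of the low incrementor
    low_inc &= low_mask

    # Second incrementor: speculative high-bits-plus-one, selected by the carry.
    high_inc = (high + 1) & ((1 << (width - split)) - 1)
    high_sel = high_inc if low_carry else high

    return (high_sel << split) | low_inc

if __name__ == "__main__":
    x = 0b0110_1111_1111                             # 12-bit value whose low byte is all ones
    print(bin(split_increment(x, width=12, split=8)))  # 0b11100000000 == x + 1
```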