G06F7/49915

METHOD AND APPARATUS WITH CALCULATION
20230058095 · 2023-02-23

A processor-implemented method includes: receiving a plurality of pieces of input data expressed in floating point; adjusting the bit-width of the mantissa of each piece of the input data by performing masking on the mantissa based on a size of an exponent of that piece of the input data; and performing an operation between the input data with the adjusted bit-widths.
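A rough Python sketch of the claimed idea for float32 inputs. The masking schedule (keep fewer mantissa bits as the exponent grows) is an illustrative assumption, not the patent's specific policy:

```python
import struct

def mask_mantissa(x: float, keep_bits: int) -> float:
    """Zero out low-order mantissa bits of a float32 value,
    keeping only keep_bits of the 23-bit mantissa."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    mask = ~((1 << (23 - keep_bits)) - 1) & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", bits & mask))[0]

def adjust(x: float) -> float:
    """Hypothetical policy: retain fewer mantissa bits for larger
    (biased) exponents, i.e. coarser steps for larger magnitudes."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    exponent = (bits >> 23) & 0xFF               # biased exponent
    keep = max(4, 23 - max(0, exponent - 127))   # assumed schedule
    return mask_mantissa(x, keep)

# The "operation between the input data" could then be, e.g., a multiply:
a, b = adjust(3.14159), adjust(2.71828)
product = a * b
```

Narrowing the mantissas this way lets the subsequent multiply use a smaller datapath at the cost of bounded relative error.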

COMPUTING APPARATUS AND METHOD FOR VECTOR INNER PRODUCT, AND INTEGRATED CIRCUIT CHIP
20220366006 · 2022-11-17

The present disclosure relates to a computing apparatus, a method, and an integrated circuit chip for a vector inner product, where the computing apparatus may be included in a combined processing apparatus. The combined processing apparatus may further include a general interconnection interface and another processing apparatus. The computing apparatus interacts with the other processing apparatus to jointly complete a computing operation specified by a user. The combined processing apparatus may further include a storage apparatus, where the storage apparatus is respectively connected to the computing apparatus and the other processing apparatus, and the storage apparatus is used for storing data of the computing apparatus and the other processing apparatus.

Floating point to fixed point conversion using exponent offset
11588497 · 2023-02-21

A binary logic circuit converts a number in floating point format having an exponent E, an exponent bias B = 2^(ew−1) − 1, and a significand comprising a mantissa M of mw bits into a fixed point format with an integer width of iw bits and a fractional width of fw bits. The circuit includes an offset unit configured to offset the exponent of the floating point number by an offset value equal to (iw − 1 − s_y) to generate a shift value s_v of sw bits given by s_v = (B − E) + (iw − 1 − s_y), the offset value being equal to the maximum amount by which the significand can be left-shifted before overflow occurs in the fixed point format; and a right-shifter operable to receive a significand input comprising a formatted set of bits derived from the significand, the shifter being configured to right-shift the input by a number of bits equal to the value represented by the k least significant bits of the shift value to generate an output result, where bitwidth[min(2^(ew−1) − 1, iw − 1 − s_y) + min(2^(ew−1) − 2, fw)] ≤ k ≤ sw, and where s_y = 1 for a signed floating point number and s_y = 0 for an unsigned floating point number.
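A behavioural Python model of such a conversion for float32 (ew = 8, B = 127) into a signed Qiw.fw fixed point value. This computes what the claimed shifter network produces, not its circuit structure, and folds the exponent offset into a single net shift:

```python
import struct

def float_to_fixed(x: float, iw: int = 16, fw: int = 16) -> int:
    """Convert a float32 value to signed fixed point with iw integer
    and fw fractional bits by shifting the significand according to
    the (biased) exponent; saturates instead of overflowing."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    sign = bits >> 31
    E = (bits >> 23) & 0xFF          # biased exponent
    M = bits & 0x7FFFFF              # 23-bit mantissa
    B = 127                          # bias = 2^(ew-1) - 1 for ew = 8
    sig = ((1 << 23) | M) if E else M   # restore implied leading bit
    shift = (E - B) + fw - 23        # net left-shift to align binary point
    mag = sig << shift if shift >= 0 else sig >> -shift
    mag = min(mag, (1 << (iw + fw - 1)) - 1)   # saturate at format max
    return -mag if sign else mag
```

In the hardware of the claim, the corresponding right-shift amount appears as s_v = (B − E) + (iw − 1 − s_y) applied to a significand that has first been left-aligned in the output word.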

Condition code anticipator for hexadecimal floating point

An aspect includes executing, by a binary based floating-point arithmetic unit of a processor, a calculation having two or more operands in hexadecimal format based on a hexadecimal floating-point (HFP) instruction and providing a condition code for a calculation result of the calculation. The floating-point arithmetic unit includes a condition code anticipator circuit that is configured to provide the condition code to the processor prior to availability of the calculation result.
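As a behavioural sketch, the condition code of an addition (0: result zero, 1: result less than zero, 2: result greater than zero) can be anticipated from the operands alone, before the adder produces the sum; the patented circuit does this in logic alongside the arithmetic pipeline:

```python
def anticipate_cc(a: float, b: float) -> int:
    """Anticipate the condition code of a + b without computing the
    sum: exact cancellation gives 0; otherwise the sign of the
    larger-magnitude operand decides between 1 (negative) and
    2 (positive)."""
    if a == -b:
        return 0
    negative = (abs(a) > abs(b) and a < 0) or (abs(b) >= abs(a) and b < 0)
    return 1 if negative else 2
```

The comparison of signs and magnitudes needs far less logic depth than a full add-and-normalize, which is what lets the condition code reach the processor early.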

METHOD AND APPARATUS FOR IMPLIED BIT HANDLING IN FLOATING POINT MULTIPLICATION
20230085048 · 2023-03-16

A method is provided that includes performing, by a processor in response to a floating point multiply instruction, multiplication of floating point numbers, wherein determination of values of implied bits of leading bit encoded mantissas of the floating point numbers is performed in parallel with multiplication of the encoded mantissas, and storing, by the processor, a result of the floating point multiply instruction in a storage location indicated by the floating point multiply instruction.
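A minimal Python sketch of the parallelism, assuming float32 fields: the wide product of the encoded 23-bit mantissas starts immediately, while the implied bits (1 for a normal number, i.e. nonzero exponent; 0 for a subnormal) are resolved separately and then folded in as cheap shifted additions:

```python
MBITS = 23  # float32 mantissa width

def mul_with_implied(E_a: int, M_a: int, E_b: int, M_b: int) -> int:
    """Full significand product via the expansion
    (i_a*2^m + M_a)(i_b*2^m + M_b)
      = i_a*i_b*2^(2m) + (i_a*M_b + i_b*M_a)*2^m + M_a*M_b,
    where the M_a*M_b term is the big multiply that can begin before
    the implied bits i_a, i_b are known."""
    partial = M_a * M_b                        # starts immediately
    i_a, i_b = int(E_a != 0), int(E_b != 0)    # implied-bit decode, "in parallel"
    return ((i_a * i_b) << (2 * MBITS)) \
         + ((i_a * M_b + i_b * M_a) << MBITS) \
         + partial
```

Because the implied-bit terms are only shifts and adds, resolving them late does not lengthen the multiplier's critical path.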

MULTIPLIER FOR FLOATING-POINT OPERATION, METHOD, INTEGRATED CIRCUIT CHIP, AND CALCULATION DEVICE
20230076931 · 2023-03-09

The present disclosure relates to a multiplier, a method, an integrated circuit chip, and a computation apparatus for a floating-point computation. The computation apparatus may be included in a combined processing apparatus, which may also include a general interconnection interface and other processing apparatus. The computation apparatus interacts with other processing apparatus to jointly complete computation operations specified by the user. The combined processing apparatus may also include a storage apparatus, which is respectively connected to the computation apparatus and other processing apparatus and is used for storing data of the computation apparatus and other processing apparatus. Solutions of the present disclosure may be widely used in various floating-point data computations.
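For reference, the textbook computation such a floating-point multiplier performs can be sketched in Python on unpacked float32 fields (normals only, truncation instead of rounding); this shows what is computed, not the disclosed circuit structure:

```python
def fp32_mul(a_fields, b_fields):
    """Multiply two float32 values given as (sign, biased_exponent,
    mantissa) tuples: XOR the signs, add exponents and remove one
    bias, multiply the 24-bit significands, then renormalize."""
    (sa, ea, ma), (sb, eb, mb) = a_fields, b_fields
    sign = sa ^ sb
    exp = ea + eb - 127                          # re-bias
    prod = ((1 << 23) | ma) * ((1 << 23) | mb)   # 48-bit significand product
    if prod >> 47:                               # product in [2, 4): shift back
        exp += 1
        prod >>= 1
    mant = (prod >> 23) & 0x7FFFFF               # drop implied bit, truncate
    return sign, exp, mant
```

Example: 1.5 × 2.0 is (0, 127, 1<<22) × (0, 128, 0) and yields (0, 128, 1<<22), i.e. 3.0.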

CHIP, TERMINAL, FLOATING-POINT OPERATION CONTROL METHOD, AND RELATED APPARATUS

A floating-point operation control method, applied to a chip comprising a multiply accumulator, includes receiving a first selection signal, and controlling an operation circuit in the multiply accumulator corresponding to a floating-point operation mode indicated by the first selection signal. The floating-point operation mode supports a multiply accumulate operation of a floating-point number of a first bit width k_1. The method further includes dividing the fractional parts of first and second operands into m first and m second suboperands of a second bit width k_2, where k_2 = k_1/m. The method further includes performing a multiplication operation based on the m first and m second suboperands to obtain a fractional product, and determining a floating-point number sum based on the fractional product and a third operand.
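The suboperand split can be sketched in Python with illustrative widths (k_1 = 16, m = 4): each fractional part is cut into m chunks of k_2 bits, the m×m chunk products are formed on narrow multipliers, and shifted recombination recovers the full k_1×k_1 product:

```python
def split_mul(frac_a: int, frac_b: int, k1: int = 16, m: int = 4) -> int:
    """Multiply two k1-bit fractional parts via m suboperands of
    k2 = k1/m bits each; the shifted sum of the m*m partial products
    equals the full-width product."""
    k2 = k1 // m
    mask = (1 << k2) - 1
    a = [(frac_a >> (k2 * i)) & mask for i in range(m)]  # m first suboperands
    b = [(frac_b >> (k2 * j)) & mask for j in range(m)]  # m second suboperands
    total = 0
    for i in range(m):
        for j in range(m):
            total += (a[i] * b[j]) << (k2 * (i + j))     # weighted partials
    return total
```

This is why one set of k_2-wide multipliers can serve several floating-point modes: the selection signal only changes how the chunks are grouped and recombined.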

SYSTEMS AND METHODS FOR ACCELERATING THE COMPUTATION OF THE EXPONENTIAL FUNCTION

Aspects of embodiments of the present disclosure relate to a field programmable gate array (FPGA) configured to implement an exponential function data path including: an input scaling stage including constant shifters and integer adders to scale a mantissa portion of an input floating-point value by approximately log_2 e to compute a scaled mantissa value, where e is Euler's number; and an exponential stage including barrel shifters and an exponential lookup table to: extract an integer portion and a fractional portion from the scaled mantissa value based on the exponent portion of the input floating-point value; apply a bias shift to the integer portion to compute a result exponent portion of a result floating-point value; lookup a result mantissa portion of the result floating-point value in the exponential lookup table based on the fractional portion; and combine the result exponent portion and the result mantissa portion to generate the result floating-point value.
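The data path rests on the identity exp(x) = 2^(x·log_2 e). A floating-point Python sketch with an assumed 8-bit lookup table (the FPGA uses shifters and integer adders rather than these float operations):

```python
import math

LUT_BITS = 8
# Table of 2^f for f = i/256, i.e. the "result mantissa" lookup.
LUT = [2.0 ** (i / (1 << LUT_BITS)) for i in range(1 << LUT_BITS)]

def fast_exp(x: float) -> float:
    """exp(x) via the staged decomposition: scale by log2(e), split
    into integer part n (becomes the result exponent) and fractional
    part f (indexes the 2^f table), then combine."""
    t = x * math.log2(math.e)   # input scaling stage
    n = math.floor(t)           # integer portion -> result exponent
    f = t - n                   # fractional portion -> table index
    return LUT[int(f * (1 << LUT_BITS))] * (2.0 ** n)
```

With an 8-bit table the relative error is on the order of 2^−8; the stated architecture would widen the table or interpolate for more precision.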

Multiple mode arithmetic circuit

A tile of an FPGA includes a multiple mode arithmetic circuit. The multiple mode arithmetic circuit is configured by control signals to operate in an integer mode, a floating-point mode, or both. In some example embodiments, multiple integer modes (e.g., unsigned, two's complement, and sign-magnitude) are selectable, multiple floating-point modes (e.g., 16-bit mantissa and 8-bit exponent, 8-bit mantissa and 6-bit exponent, and 6-bit mantissa and 6-bit exponent) are supported, or any suitable combination thereof. The tile may also fuse a memory circuit with the arithmetic circuits. Connections directly between multiple instances of the tile are also available, allowing multiple tiles to be treated as larger memories or arithmetic circuits. By using these connections, referred to as cascade inputs and outputs, the input and output bandwidth of the arithmetic circuit is further increased.

FLOATING-POINT COMPUTATION APPARATUS AND METHOD USING COMPUTING-IN-MEMORY

Disclosed herein are a floating-point computation apparatus and method using Computing-in-Memory (CIM). The floating-point computation apparatus performs a multiply-and-accumulation operation on pieces of input neuron data represented in a floating-point format, and includes a data preprocessing unit configured to separate and extract an exponent and a mantissa from each of the pieces of input neuron data, an exponent processing unit configured to perform CIM on input neuron exponents, which are exponents separated and extracted from the input neuron data, and a mantissa processing unit configured to perform a high-speed computation on input neuron mantissas, separated and extracted from the input neuron data, wherein the exponent processing unit determines a mantissa shift size for a mantissa computation and transfers the mantissa shift size to the mantissa processing unit, and the mantissa processing unit normalizes a result of the mantissa computation and transfers a normalization value to the exponent processing unit.
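A behavioural Python sketch of the exponent/mantissa split for a multiply-and-accumulate over value pairs given as (exponent, signed significand) with value = sig · 2^exp (field widths and the CIM array itself are abstracted away): the "exponent unit" computes each product exponent and the shift to a common maximum exponent, and the "mantissa unit" applies those shifts and accumulates:

```python
def fp_dot(xs, ws):
    """Dot product of float-like (exp, sig) pairs, split the way the
    apparatus splits it. Returns (acc, emax) with value acc * 2^emax.
    Sign handling and final renormalization are kept minimal: shifts
    here are on non-negative significands only."""
    # exponent unit: product exponents and alignment shift sizes
    exps = [ex + ew for (ex, _), (ew, _) in zip(xs, ws)]
    emax = max(exps)
    shifts = [emax - e for e in exps]          # mantissa shift sizes
    # mantissa unit: multiply, align to emax, accumulate
    acc = 0
    for ((_, sx), (_, sw)), sh in zip(zip(xs, ws), shifts):
        acc += (sx * sw) >> sh
    return acc, emax
```

In the apparatus the alignment shifts flow from the exponent unit to the mantissa unit, and the normalization of acc flows back to adjust the exponent, matching the handshake described in the abstract.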