G06F7/499

CIRCUITRY AND METHOD
20230005209 · 2023-01-05

Circuitry comprises ray tracing circuitry comprising a plurality of floating-point circuitries to perform floating-point processing operations to detect an intersection between a virtual ray defined by a ray direction and a test region, the floating-point circuitries operating to a given precision to generate an output floating-point value comprising a significand and an exponent; in which at least some of the plurality of floating-point circuitries are configured to round, using a predetermined directed rounding mode, any denormal floating-point value generated by operation of that circuitry so as to output normal values, a denormal floating-point value being a floating-point value in which the significand comprises one or more leading zeroes.
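
For intuition, a minimal Python sketch of one way such a directed rounding policy could behave (the round-away-from-zero choice and the float32 threshold are assumptions, not details from the abstract): any subnormal result is pushed up in magnitude to the smallest normal value, so later pipeline stages never see a significand with leading zeroes.

SMALLEST_NORMAL_F32 = 2.0 ** -126   # minimum normal float32 magnitude

def round_denormal_to_normal(x: float) -> float:
    """Directed rounding of denormals: round away from zero to the
    smallest normal magnitude; zeros and normal values pass through."""
    if x == 0.0:
        return 0.0
    if abs(x) < SMALLEST_NORMAL_F32:            # subnormal (denormal) range
        return SMALLEST_NORMAL_F32 if x > 0 else -SMALLEST_NORMAL_F32
    return x

print(round_denormal_to_normal(1e-40))   # -> 1.1754943508222875e-38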

System to perform unary functions using range-specific coefficient sets

A method comprising: storing a plurality of entries, each entry of the plurality of entries associated with a portion of a range of input values, each entry of the plurality of entries comprising a set of coefficients defining a power series approximation; selecting a first entry of the plurality of entries based on a determination that a floating point input value is within the portion of the range of input values that is associated with the first entry; and calculating an output value by evaluating the power series approximation defined by the set of coefficients of the first entry at the floating point input value.
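
As an illustration, here is a small Python sketch of the table-driven scheme (the choice of exp as the unary function, the two sub-ranges, and the degree-3 Taylor coefficients are all assumptions): each entry covers part of the input range and holds coefficients that are evaluated at the input with Horner's rule.

import math

def make_entry(lo: float, hi: float) -> dict:
    c = (lo + hi) / 2.0                       # expansion center
    e = math.exp(c)
    return {"lo": lo, "hi": hi, "center": c,
            "coeffs": [e, e, e / 2.0, e / 6.0]}   # Taylor coefficients of exp

TABLE = [make_entry(0.0, 1.0), make_entry(1.0, 2.0)]   # one entry per sub-range

def approx_exp(x: float) -> float:
    # Select the entry whose sub-range contains x (raises if x is outside [0, 2)).
    entry = next(e for e in TABLE if e["lo"] <= x < e["hi"])
    t = x - entry["center"]
    acc = 0.0
    for coef in reversed(entry["coeffs"]):    # Horner evaluation
        acc = acc * t + coef
    return acc

print(approx_exp(0.3), math.exp(0.3))   # close agreement on [0, 2)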

ARITHMETIC DEVICE, METHOD, AND PROGRAM
20220382544 · 2022-12-01

A processor determines an exponent common to a plurality of numerical values, determines a mantissa for each of the plurality of numerical values based on the determined exponent, and performs the four basic arithmetic operations using a sign, the determined exponent, and the determined mantissas.
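
A compact Python model of this block-style arithmetic (the 8-bit signed mantissa width and the max-magnitude exponent choice are assumptions): one exponent is shared by the whole group, each value keeps only an integer mantissa relative to it, and addition reduces to integer mantissa arithmetic after exponent alignment.

import math

MANT_BITS = 7   # magnitude bits of a signed 8-bit mantissa (assumption)

def to_block_fp(values):
    max_mag = max(abs(v) for v in values)
    shared_exp = math.frexp(max_mag)[1]             # exponent of the largest value
    scale = 2.0 ** (shared_exp - MANT_BITS)
    mantissas = [round(v / scale) for v in values]  # per-value integer mantissa
    return shared_exp, mantissas

def block_add(values_a, values_b):
    ea, ma = to_block_fp(values_a)
    eb, mb = to_block_fp(values_b)
    e = max(ea, eb)                                 # align to the larger exponent
    ma = [m >> (e - ea) for m in ma]
    mb = [m >> (e - eb) for m in mb]
    scale = 2.0 ** (e - MANT_BITS)
    return [(x + y) * scale for x, y in zip(ma, mb)]

print(block_add([1.5, -0.25, 3.0], [0.5, 0.125, -1.0]))  # -> [2.0, -0.125, 2.0]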

Arithmetic processing apparatus, control method, and non-transitory computer-readable recording medium having stored therein control program
11514320 · 2022-11-29

An arithmetic processing apparatus includes: a first determiner that determines, when a given learning model is iteratively trained, an offset amount for correcting the decimal point position of fixed-point number data used in the learning in accordance with a degree of progress of the learning; and a second determiner that determines, based on the offset amount, the decimal point position of the fixed-point number data to be used in the learning. This configuration prevents the accuracy of the learning result of the learning model from degrading.
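
A hedged sketch of the two-step determination in Python (the progress schedule and range statistic below are purely illustrative assumptions; the abstract does not specify them): a first step maps learning progress to an offset, and a second step combines that offset with a statistics-derived base position to fix the decimal point of the fixed-point data.

import math

def determine_offset(progress: float) -> int:
    """First determiner: map training progress (0..1) to an offset.
    Assumed schedule: as learning converges, values shrink, so more
    fractional bits are afforded."""
    if progress < 0.25:
        return 0
    return 1 if progress < 0.75 else 2

def determine_point_position(values, progress: float, word_bits: int = 16) -> int:
    """Second determiner: base fractional-bit position from the current
    value range, corrected by the progress-dependent offset."""
    max_mag = max(abs(v) for v in values)
    int_bits = max(1, math.ceil(math.log2(max_mag + 1)))  # bits left of the point
    base_frac_bits = word_bits - 1 - int_bits             # sign bit reserved
    return base_frac_bits + determine_offset(progress)

print(determine_point_position([0.8, -1.6, 0.05], progress=0.9))  # -> 15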

Histogram-based per-layer data format selection for hardware implementation of deep neural network

A histogram-based method of selecting a fixed point number format for representing a set of values input to, or output from, a layer of a Deep Neural Network (DNN). The method comprises: obtaining a histogram that represents an expected distribution of the set of values of the layer, each bin of the histogram being associated with a frequency value and a representative value in a floating point number format; quantising the representative values according to each of a plurality of potential fixed point number formats; estimating, for each of the plurality of potential fixed point number formats, a total quantisation error based on the frequency values of the histogram and a distance value for each bin that is based on the quantisation of the representative value for that bin; and selecting the fixed point number format associated with the smallest estimated total quantisation error as the optimum fixed point number format for representing the set of values of the layer.
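
The selection loop is easy to model in Python (the 8-bit word, the candidate formats, and the absolute-difference distance are assumptions; the abstract does not fix the distance metric): quantise each bin's representative value under each candidate format, weight the per-bin distance by the bin frequency, and keep the format with the smallest total.

def quantise(value: float, frac_bits: int, word_bits: int = 8) -> float:
    step = 2.0 ** -frac_bits
    lo = -(2 ** (word_bits - 1)) * step
    hi = (2 ** (word_bits - 1) - 1) * step
    return min(max(round(value / step) * step, lo), hi)   # round + saturate

def select_format(bins, candidate_frac_bits):
    """bins: list of (representative_value, frequency) pairs."""
    best = None
    for fb in candidate_frac_bits:
        total = sum(freq * abs(rep - quantise(rep, fb))   # assumed L1 distance
                    for rep, freq in bins)
        if best is None or total < best[1]:
            best = (fb, total)
    return best[0]

hist = [(-1.2, 10), (-0.1, 500), (0.05, 800), (0.9, 30), (6.0, 2)]
print(select_format(hist, candidate_frac_bits=[2, 4, 6]))   # -> 6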

METHOD AND APPARATUS WITH CALCULATION
20230058095 · 2023-02-23

A processor-implemented method includes: receiving a plurality of pieces of input data expressed as floating point; adjusting a bit-width of a mantissa of each piece of the input data by performing masking on the mantissa based on a size of an exponent of that piece of the input data; and performing an operation between the pieces of input data with the adjusted bit-widths.
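
A rough Python model of the masking step (the one-bit-per-exponent-step policy and the float32 layout are assumptions; the abstract only states that the kept mantissa width depends on the exponent's size): values whose exponents are far below the largest exponent in the batch retain fewer mantissa bits before the operation.

import struct

def mask_mantissa(x: float, max_exp: int, full_bits: int = 23) -> float:
    """Zero out low mantissa bits of a float32 value; values whose exponent
    is far below max_exp keep fewer bits (one fewer per exponent step,
    floored at 0 -- an illustrative policy)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    exp = (bits >> 23) & 0xFF
    keep = max(0, full_bits - (max_exp - exp))        # width shrinks with the gap
    bits &= ~((1 << (full_bits - keep)) - 1) & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", bits))[0]

vals = [3.14159, 0.0001234]
max_e = max((struct.unpack("<I", struct.pack("<f", v))[0] >> 23) & 0xFF
            for v in vals)
masked = [mask_mantissa(v, max_e) for v in vals]
print(masked)         # the small value loses most of its mantissa bits
print(sum(masked))    # operation on the width-adjusted inputs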

COMPUTING APPARATUS AND METHOD FOR VECTOR INNER PRODUCT, AND INTEGRATED CIRCUIT CHIP
20220366006 · 2022-11-17

The present disclosure relates to a computing apparatus, a method, and an integrated circuit chip for computing a vector inner product, where the computing apparatus may be included in a combined processing apparatus. The combined processing apparatus may further include a general interconnection interface and another processing apparatus; the computing apparatus interacts with the other processing apparatus to jointly complete a computing operation specified by a user. The combined processing apparatus may further include a storage apparatus connected to both the computing apparatus and the other processing apparatus, for storing data of the computing apparatus and the other processing apparatus.

Floating point to fixed point conversion using exponent offset
11588497 · 2023-02-21

A binary logic circuit converts a number in floating point format, having an exponent E, an exponent bias B = 2^(ew−1) − 1, and a significand comprising a mantissa M of mw bits, into a fixed point format with an integer width of iw bits and a fractional width of fw bits. The circuit includes an offset unit configured to offset the exponent of the floating point number by an offset value equal to (iw − 1 − s_y) to generate a shift value s_v of sw bits given by s_v = (B − E) + (iw − 1 − s_y), the offset value being equal to the maximum amount by which the significand can be left-shifted before overflow occurs in the fixed point format; and a right-shifter operable to receive a significand input comprising a formatted set of bits derived from the significand, the shifter being configured to right-shift the input by a number of bits equal to the value represented by the k least significant bits of the shift value to generate an output result, where bitwidth[min(2^(ew−1) − 1, iw − 1 − s_y) + min(2^(ew−1) − 2, fw)] ≤ k ≤ sw, and where s_y = 1 for a signed floating point number and s_y = 0 for an unsigned floating point number.
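
A small software model helps check the arithmetic (the bit widths below are assumptions, and the k-bit truncation of the shift value is simplified to a full-width shift): the biased exponent is offset by (iw − 1 − s_y), the significand is left-aligned to the top magnitude bit of the fixed point word, and the offset shift value then right-shifts it into place.

def float_bits_to_fixed(sign: int, E: int, M: int,
                        ew: int = 8, mw: int = 23,
                        iw: int = 16, fw: int = 16) -> int:
    """Model of the conversion; assumes a normal input and iw+fw-1-s_y >= mw."""
    B = 2 ** (ew - 1) - 1                # exponent bias
    s_y = 1                              # signed floating point input
    sig = (1 << mw) | M                  # significand with implicit leading 1
    aligned = sig << (iw + fw - 1 - s_y - mw)   # left-align to top magnitude bit
    s_v = (B - E) + (iw - 1 - s_y)       # offset exponent = right-shift amount
    if s_v < 0:
        raise OverflowError("value too large for the fixed point format")
    fixed = aligned >> s_v               # truncating right shift
    return -fixed if sign else fixed

# 1.5 in float32: sign=0, E=127, M=0x400000; in Q16.16, 1.5 == 1.5 * 2**16.
print(float_bits_to_fixed(0, 127, 0x400000))   # -> 98304 == 1.5 * 65536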

Method and apparatus for generating fixed-point quantized neural network

A method of generating a fixed-point quantized neural network includes: analyzing, for each channel, a statistical distribution of floating-point parameter values of feature maps and a kernel from data of a pre-trained floating-point neural network; determining, for each channel, a fixed-point expression of each of the parameters that statistically covers a distribution range of the floating-point parameter values, based on the statistical distribution for that channel; determining fractional lengths of a bias and a weight for each channel among the parameters of the fixed-point expression for each channel based on a result of performing a convolution operation; and generating a fixed-point quantized neural network in which the bias and the weight for each channel have the determined fractional lengths.
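
A toy per-channel sketch in Python (the percentile coverage rule and the 8-bit word are assumptions; the abstract only requires that each channel's fixed-point expression statistically cover that channel's value range): each channel's fractional length follows from its own observed distribution.

import math

def fractional_length(channel_values, word_bits: int = 8) -> int:
    """Pick a fractional length so the channel's covered range fits a
    signed word (99.99th-percentile coverage is an assumed policy)."""
    vals = sorted(abs(v) for v in channel_values)
    cover = vals[min(len(vals) - 1, int(0.9999 * len(vals)))]
    int_bits = max(0, math.floor(math.log2(cover)) + 1) if cover > 0 else 0
    return word_bits - 1 - int_bits           # sign bit reserved

def quantise_channel(channel_values, frac_len: int):
    step = 2.0 ** -frac_len
    return [round(v / step) * step for v in channel_values]

weights = [0.31, -0.18, 0.02, -0.44, 0.27]    # one kernel channel
fl = fractional_length(weights)
print(fl, quantise_channel(weights, fl))      # per-channel fixed point values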

COMPUTE-IN-MEMORY MACRO DEVICE AND ELECTRONIC DEVICE

A compute-in-memory (CIM) macro device and an electronic device are proposed. The CIM macro device includes a CIM cell array including multiple CIM cells. First data is divided into at least two bit groups, including a first bit group comprising the most significant bits of the first data and a second bit group comprising the least significant bits of the first data, and the bit groups are respectively loaded into CIM cells of different columns of the CIM cell array. The electronic device includes at least one CIM macro and at least one processing circuit. The processing circuit is configured to receive and perform an operation on parallel outputs respectively corresponding to the columns of the CIM cell array, where the parallel outputs include multiple correspondences, and where each of the correspondences includes most significant bits of an output activation and least significant bits of the output activation.
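
Numerically, the bit-group mapping is easy to model (the 8-bit weights, 4-bit groups, and the recombination step are assumptions about one plausible arrangement): each column accumulates partial products for one bit group, and the processing circuit recombines the column outputs with a power-of-two shift.

def split_bits(value: int, group_bits: int = 4):
    msb = value >> group_bits                    # most significant bit group
    lsb = value & ((1 << group_bits) - 1)        # least significant bit group
    return msb, lsb

def cim_mac(weights, activations, group_bits: int = 4) -> int:
    groups = [split_bits(w, group_bits) for w in weights]
    col_msb = sum(m * a for (m, _), a in zip(groups, activations))  # column 0
    col_lsb = sum(l * a for (_, l), a in zip(groups, activations))  # column 1
    # Processing circuit: shift the MSB column's partial sum back up.
    return (col_msb << group_bits) + col_lsb

ws, acts = [200, 55, 130], [3, 1, 2]
print(cim_mac(ws, acts), sum(w * a for w, a in zip(ws, acts)))  # both -> 915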