Patent classifications
G06F5/012
METHOD FOR PROCESSING DATA SETS CONTAINING AT LEAST ONE TIME SERIES, DEVICE FOR CARRYING OUT, VEHICLE AND COMPUTER PROGRAM
A method for processing data sets having at least one time series. The time series contains measurement values of sensors that are sensed at particular times; both the measured values and the respective times at which they were sensed are stored as data elements of the time series. The method compresses the data set by rounding the sensed data and subsequently decimating the data elements of the data set that are contained in the time series.
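A minimal Python sketch of the round-then-decimate idea (hypothetical function and data names, not the patented implementation): readings are rounded to a coarser precision, and consecutive data elements whose rounded values repeat are decimated, keeping only the first (time, value) pair of each run.

```python
def compress(series, ndigits=1):
    """Round each sensed value, then decimate repeated rounded values.

    series: list of (timestamp, value) data elements of one time series.
    Returns the compressed list of (timestamp, rounded_value) pairs.
    """
    out = []
    prev = None
    for t, v in series:
        r = round(v, ndigits)        # rounding of the sensed data
        if r != prev:                # decimation: drop repeated elements
            out.append((t, r))
            prev = r
    return out

samples = [(0, 20.04), (1, 20.02), (2, 20.31), (3, 20.28)]
print(compress(samples))  # [(0, 20.0), (2, 20.3)]
```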
FLOATING-POINT LOGARITHMIC NUMBER SYSTEM SCALING SYSTEM FOR MACHINE LEARNING
An integrated circuit includes a hardware inexact floating-point logarithmic number system (FPLNS) multiplier. The integrated circuit accesses registers containing a first floating-point binary value with its first logarithmic binary value and a second floating-point binary value with its second logarithmic binary value, each in an FPLNS data format. The FPLNS multiplier is configured to multiply the first and second floating-point binary values by adding the first logarithmic binary value to the second logarithmic binary value to form a first logarithmic sum, shifting a bias constant by the number of bits of the mantissa of the first floating-point binary value to form a first shifted bias value, subtracting a correction factor from the first shifted bias value to form a first corrected bias value, and subtracting the first corrected bias value from the first logarithmic sum to form a first result.
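The bias-shift-and-subtract structure resembles the classic Mitchell-style trick of treating an IEEE-754 bit pattern as an approximate logarithm. A hedged Python sketch for positive single-precision inputs (the correction constant here is an assumed, tunable value, not one from the patent):

```python
import struct

BIAS = 127            # IEEE-754 single-precision exponent bias
MANT_BITS = 23        # number of mantissa bits
CORRECTION = 0x0005C000  # assumed correction factor; tunable, illustrative only

def f2i(x):
    """Raw 32-bit pattern of a float."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def i2f(n):
    """Float whose raw 32-bit pattern is n."""
    return struct.unpack('<f', struct.pack('<I', n & 0xFFFFFFFF))[0]

def fplns_mul(a, b):
    """Approximate a*b for positive floats via log-domain addition.

    The bit pattern acts as a logarithm: adding two patterns adds the
    exponents and (approximately) the logs of the mantissas.
    """
    log_sum = f2i(a) + f2i(b)                 # first logarithmic sum
    shifted_bias = BIAS << MANT_BITS          # bias shifted by mantissa width
    corrected_bias = shifted_bias - CORRECTION
    return i2f(log_sum - corrected_bias)      # inexact product
```

The result is inexact by a few percent, which is the accepted trade-off of an FPLNS multiplier aimed at machine-learning workloads.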
Method of Performing Hardware Efficient Unbiased Rounding of a Number
A method and hardware for performing hardware-efficient unbiased rounding of a number. The method includes receiving the number in a binary format having a first portion and a second portion, where the first portion comprises the bits of the number above a rounding point and the second portion comprises the bits of the number after the rounding point. The method includes adding a first amount to the number to obtain a first value, and determining whether the bit above the rounding point of a controlling value is a ‘0’ bit or a ‘1’ bit, where the controlling value is either the received number in the binary format or the first value. The method further includes adding a second amount to the ‘b+1’ least significant bits of the first value to obtain a second value if the bit above the rounding point of the controlling value is a ‘0’ bit, and rounding the number by truncating the last b bits of the second value or the last b bits of the first value, based on the determination.
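The behavior being targeted is round-to-nearest with ties broken toward an even result, which accumulates no bias over many roundings. A direct (not hardware-optimized) Python sketch of unbiased rounding of the last b bits:

```python
def round_half_even(n, b):
    """Round the b least-significant bits off non-negative integer n,
    breaking exact ties toward an even quotient (unbiased rounding).
    """
    half = 1 << (b - 1)            # value of a tie in the dropped bits
    low = n & ((1 << b) - 1)       # bits after the rounding point
    q = n >> b                     # bits above the rounding point
    if low > half or (low == half and (q & 1)):
        q += 1                     # round up: above half, or tie with odd q
    return q << b
```

The patented method reaches the same result with additions and a single truncation rather than a comparison, which is cheaper in hardware.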
FLOATING-POINT COMPUTATION APPARATUS AND METHOD USING COMPUTING-IN-MEMORY
Disclosed herein are a floating-point computation apparatus and method using Computing-in-Memory (CIM). The floating-point computation apparatus performs a multiply-and-accumulate operation on pieces of input neuron data represented in a floating-point format. It includes a data preprocessing unit configured to separate and extract an exponent and a mantissa from each piece of input neuron data; an exponent processing unit configured to perform CIM on the input neuron exponents separated and extracted from the input neuron data; and a mantissa processing unit configured to perform a high-speed computation on the input neuron mantissas separated and extracted from the input neuron data. The exponent processing unit determines a mantissa shift size for the mantissa computation and transfers the mantissa shift size to the mantissa processing unit, and the mantissa processing unit normalizes the result of the mantissa computation and transfers the normalization value to the exponent processing unit.
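A software model of the split-and-shift dataflow (assumed format and function names, positive inputs only): exponents and integer mantissas are processed separately, exponent differences determine the mantissa shift sizes, and partial products are aligned before accumulation.

```python
import math

def split_fp(x, mant_bits=10):
    """Crude exponent/mantissa separation of a positive float."""
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= m < 1
    return e, int(m * (1 << mant_bits))  # integer mantissa (truncated)

def cim_mac(pairs, mant_bits=10):
    """Multiply-accumulate via separated exponent and mantissa paths.

    Exponents add; integer mantissas multiply; the largest exponent
    fixes the shift size that aligns every partial product.
    """
    prods = []
    for a, b in pairs:
        ea, ma = split_fp(a, mant_bits)
        eb, mb = split_fp(b, mant_bits)
        prods.append((ea + eb, ma * mb))         # exponent path / mantissa path
    emax = max(e for e, _ in prods)
    acc = sum(m >> (emax - e) for e, m in prods)  # shift sizes from exponents
    return acc * 2.0 ** (emax - 2 * mant_bits)    # renormalize the result
```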
Implementing neural networks in fixed point arithmetic computing systems
Methods, systems, and computer storage media for implementing neural networks in fixed point arithmetic computing systems. In one aspect, a method includes the actions of: receiving a request to process a neural network using a processing system that performs neural network computations using fixed point arithmetic; for each node of each layer of the neural network, determining a respective scaling value for the node from the respective set of floating point weight values for the node, and converting each floating point weight value of the node into a corresponding fixed point weight value using the respective scaling value for the node, to generate a set of fixed point weight values for the node; and providing the sets of fixed point weight values for the nodes to the processing system for use in processing inputs using the neural network.
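A minimal sketch of per-node scaling (one common way to derive a scaling value, chosen here for illustration; the patent does not prescribe this exact formula): the scaling value maps the node's largest-magnitude float weight onto the largest representable signed integer.

```python
def to_fixed_point(weights, bits=8):
    """Per-node conversion: derive a scaling value from the node's
    floating point weights, then map each weight to a signed integer.
    """
    max_int = (1 << (bits - 1)) - 1               # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / max_int
    fixed = [round(w / scale) for w in weights]   # fixed point weight values
    return scale, fixed
```

At inference time the processing system computes with the integer weights, and the per-node scale is folded back in when interpreting the node's output.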
FLOATING POINT TO FIXED POINT CONVERSION
A binary logic circuit converts a number in floating point format having an exponent E of ew bits, an exponent bias B given by B = 2^(ew-1) - 1, and a significand comprising a mantissa M of mw bits into a fixed point format with an integer width of iw bits and a fractional width of fw bits. The circuit includes a shifter operable to receive a significand input comprising a contiguous set of the most significant bits of the significand and configured to left-shift the significand input by a number of bits equal to the value represented by k least significant bits of the exponent to generate a shifter output, wherein min{(ew - 1), bitwidth(iw - 2 - s_y)} ≤ k ≤ (ew - 1) where s_y = 1 for a signed floating point number and s_y = 0 for an unsigned floating point number, and a multiplexer coupled to the shifter and configured to: receive an input comprising a contiguous set of bits of the shifter output; and output the input if the most significant bit of the exponent is equal to one.
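A behavioral Python model of the conversion (a sketch, not the claimed circuit: overflow into the iw integer bits is not checked, and single precision is assumed): the exponent selects how far the significand is shifted relative to the fw fractional bits.

```python
import struct

def float_to_fixed(x, iw=16, fw=16):
    """Convert an IEEE-754 single-precision value to signed Q(iw).(fw)
    fixed point by shifting the significand according to the exponent.
    Overflow beyond the iw integer bits is ignored in this sketch.
    """
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF
    mant = (bits & 0x7FFFFF) | (0x800000 if exp else 0)  # implicit leading 1
    # value is mant * 2**(exp - 127 - 23); align to fw fractional bits
    shift = exp - 127 - 23 + fw
    val = mant << shift if shift >= 0 else mant >> -shift
    return -val if sign else val
```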
Constraining function approximation hardware integrated with fixed-point to floating-point conversion
A method of constraining data represented in a deep neural network is described. The method includes determining an initial shifting specified to convert a fixed-point input value to a floating-point output value. The method also includes determining an additional shifting specified to constrain a dynamic range during converting of the fixed-point input value to the floating-point output value. The method further includes performing both the initial shifting and the additional shifting together to form a dynamic, range constrained, normalized floating-point output value.
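One way to picture the combined operation (an assumed, simplified reading: saturation stands in for the additional shifting) is a fixed-to-float conversion that also clamps the output into a constrained dynamic range in the same step.

```python
def fixed_to_float_constrained(v, frac_bits, exp_max=7):
    """Sketch: the initial shift converts fixed-point v (frac_bits
    fractional bits) to a real value; the additional step constrains
    the dynamic range by saturating magnitudes whose exponent would
    exceed exp_max. Both are applied in one pass.
    """
    x = v / (1 << frac_bits)          # initial shifting
    limit = 2.0 ** (exp_max + 1)      # largest magnitude in the range
    return max(-limit, min(limit, x)) # dynamic range constraint
```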
Method and apparatus for controlling range of representable numbers
Provided are a method and an apparatus for controlling a range of representable numbers. The method includes receiving a floating point value represented by an exponent and a mantissa, each represented by a predetermined number of bits; determining a bit configuration of the exponent and the mantissa of the floating point value based on the value of the most significant bit of the exponent; and determining a constant required for calculation of the floating point value according to the determined bit configuration of the exponent.
Shift significand of decimal floating point data
A decimal floating point finite number in a decimal floating point format is composed from the number in a different format. A decimal floating point format includes fields to hold information relating to the sign, exponent and significand of the decimal floating point finite number. Other decimal floating point data, including infinities and NaNs (not a number), are also composed. Decimal floating point data are also decomposed from the decimal floating point format to a different format. For composition and decomposition, one or more instructions may be employed, including a shift significand instruction.
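A small Python model of a shift-significand operation (assumed semantics, illustrative only): the decimal digits of the significand shift left or right with zeros filled in, digits shifted out are lost, and the exponent field is untouched.

```python
def shift_significand(digits, exponent, shift):
    """Shift the significand digits of a decimal floating point number.

    digits: tuple of decimal digits, most significant first.
    shift:  positive shifts left, negative shifts right; the exponent
            is returned unchanged, modeling a shift of the coefficient
            field only.
    """
    if shift >= 0:
        shifted = digits[shift:] + (0,) * shift    # left: drop leading digits
    else:
        shifted = (0,) * -shift + digits[:shift]   # right: drop trailing digits
    return shifted, exponent
```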
Scaling half-precision floating point tensors for training deep neural networks
One embodiment provides a machine-learning accelerator device with a multiprocessor to execute parallel threads of an instruction stream, the multiprocessor including a compute unit with a set of functional units, each functional unit to execute at least one of the parallel threads of the instruction stream. The compute unit includes compute logic configured to execute a single instruction to scale an input tensor associated with a layer of a neural network according to a scale factor, the input tensor stored in a floating-point data type, the compute logic to scale the input tensor so that the data distribution of the input tensor's data can be represented by a 16-bit floating point data type.
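A sketch of the scaling step in plain Python (hypothetical helper; the hardware applies the factor in a single fused instruction, and the half-precision maximum finite value 65504 is from IEEE 754 binary16): a power-of-two scale factor is chosen so the tensor's values fit the FP16 range with some headroom.

```python
FP16_MAX = 65504.0   # largest finite half-precision (binary16) value

def scale_for_fp16(tensor, headroom=2.0):
    """Pick a power-of-two scale factor so the tensor's data
    distribution fits the half-precision range with headroom,
    then apply it in one pass.
    """
    peak = max(abs(x) for x in tensor)
    factor = 1.0
    while peak * factor * headroom > FP16_MAX:     # shrink oversized tensors
        factor /= 2.0
    while peak * factor * headroom * 2.0 <= FP16_MAX:
        factor *= 2.0                              # grow undersized tensors
    return factor, [x * factor for x in tensor]
```

Power-of-two factors are a common choice because undoing the scale later is exact, affecting only the exponent of each value.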