Patent classifications
G06F7/4833
MACHINE LEARNING WITH INPUT DATA DOMAIN TRANSFORMATION
Aspects described herein provide a method of processing data in a machine learning model, including: receiving first domain input data; transforming the first domain input data to second domain input data via a domain transformation function; providing the second domain input data to a first layer of a machine learning model; processing the second domain input data in the first layer of the machine learning model according to a set of layer weights; and outputting second domain output data from the first layer of the machine learning model.
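The abstract above can be illustrated with a minimal sketch. The choice of log-domain as the "second domain", the identity weight matrix, and all function names below are illustrative assumptions; the claim covers domain transformation functions generally.

```python
import numpy as np

def domain_transform(x):
    # Map first-domain input data to the second domain.
    # A log transform is used here purely for illustration.
    return np.log2(x)

def first_layer(x_second_domain, weights):
    # Process second-domain input data according to a set of layer
    # weights, producing second-domain output data.
    return x_second_domain @ weights

x = np.array([1.0, 2.0, 4.0])            # first-domain input data
w = np.eye(3)                            # illustrative layer weights
y = first_layer(domain_transform(x), w)  # second-domain output data
print(y)  # [0. 1. 2.]
```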
MULTI-DIMENSIONAL LOGARITHMIC NUMBER SYSTEM PROCESSOR FOR INNER PRODUCT COMPUTATIONS
Methods and apparatus are described for the use of a multi-dimensional logarithmic number system for hardware acceleration of inner product computations. These methods and apparatus may be used in any device that requires low-power, low-area, and fast inner product computational units, such as edge devices performing deep neural network training and inference calculations. In a particular embodiment, neural network training is performed using a multi-dimensional logarithmic data representation to obtain a set of neural network weight coefficients. Given the determined weight coefficients, the second base of the multi-dimensional logarithmic data representation is optimized. This optimized representation may then be used to perform inference with the neural network.
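A minimal sketch of the two-base idea: a value is represented as 2**a * D**b, so multiplication reduces to component-wise exponent addition. The second base D = 3 and the brute-force encoder are illustrative assumptions; the abstract describes optimizing D for a given set of trained weights.

```python
D = 3.0  # second base -- an arbitrary illustrative choice

def to_mdlns(x, search=range(-8, 9)):
    # Find the nearest (a, b) exponent pair by brute force
    # (for illustration only; real encoders are far cheaper).
    return min(((a, b) for a in search for b in search),
               key=lambda ab: abs(x - 2.0**ab[0] * D**ab[1]))

def mdlns_mul(ab1, ab2):
    # Multiplication is component-wise exponent addition.
    return (ab1[0] + ab2[0], ab1[1] + ab2[1])

def from_mdlns(ab):
    return 2.0**ab[0] * D**ab[1]

a6 = to_mdlns(6.0)    # 6 = 2**1 * 3**1 -> (1, 1)
a12 = to_mdlns(12.0)  # 12 = 2**2 * 3**1 -> (2, 1)
print(from_mdlns(mdlns_mul(a6, a12)))  # 72.0
```

An inner product in this representation needs only exponent additions for its multiplications, which is what makes it attractive for low-power accelerators.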
PROCESSING WITH COMPACT ARITHMETIC PROCESSING ELEMENT
Low-precision computers can be efficient at finding possible answers to search problems. However, some tasks demand better answers than a single low-precision search can provide. A computer system augments low-precision computing with a small amount of high-precision computing to improve search quality with little additional computation.
NEURAL NETWORK ACCELERATOR USING LOGARITHMIC-BASED ARITHMETIC
Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum. The sum may then be converted back into the logarithmic format.
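The quotient/remainder decomposition above can be sketched as follows. Values are assumed to be powers 2**(e/N) with integer scaled exponent e and N fractional bins (N = 4 is an arbitrary illustrative choice): each exponent is split as e = q*N + r, quotient terms 2**q are accumulated per remainder bucket as integer partial sums, and the partial sums are scaled by 2**(r/N) to form the final sum.

```python
import math
from collections import defaultdict

N = 4  # number of fractional bins -- illustrative

def log_add(exponents):
    # Decompose each exponent into quotient and remainder components
    # and accumulate integer partial sums per remainder bucket.
    partial = defaultdict(int)
    for e in exponents:
        q, r = divmod(e, N)
        partial[r] += 2**q
    # Multiply each partial sum by its remainder factor 2**(r/N)
    # and accumulate the true sum.
    total = sum(p * 2.0**(r / N) for r, p in partial.items())
    # Convert the sum back into the (scaled-exponent) log format.
    return math.log2(total) * N

# Add 2**1 + 2**2, represented as scaled exponents e = 4 and e = 8.
s = log_add([4, 8])
print(2.0**(s / N))  # ~6.0
```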
ASYNCHRONOUS ACCUMULATOR USING LOGARITHMIC-BASED ARITHMETIC
Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components using an asynchronous accumulator to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum. The sum may then be converted back into the logarithmic format.
INFERENCE ACCELERATOR USING LOGARITHMIC-BASED ARITHMETIC
Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components using an asynchronous accumulator to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum. The sum may then be converted back into the logarithmic format.
COMPUTER ARCHITECTURE FOR PERFORMING INVERSION USING CORRELITHM OBJECTS IN A CORRELITHM OBJECT PROCESSING SYSTEM
A system includes a memory and a node. The memory stores first and second log string correlithm objects. The node aligns the first and second log string correlithm objects such that a sub-string correlithm object from the first log string correlithm object associated with the logarithmic value of ten aligns with a sub-string correlithm object from the second log string correlithm object representing the logarithmic value of one. The node receives a first real-world numerical value and identifies a first sub-string correlithm object from the first log string correlithm object that corresponds to the first real-world numerical value. The node determines which sub-string correlithm object from the second log string correlithm object aligns in n-dimensional space with the first sub-string correlithm object from the first log string correlithm object, and outputs the determined sub-string correlithm object.
MECHANISM TO PERFORM SINGLE PRECISION FLOATING POINT EXTENDED MATH OPERATIONS
A processor to facilitate execution of a single-precision floating point operation on an operand is disclosed. The processor includes one or more execution units, each having a plurality of floating point units to execute one or more instructions to perform the single-precision floating point operation on the operand, including performing a floating point operation on an exponent component of the operand; and performing a floating point operation on a mantissa component of the operand, comprising dividing the mantissa component into a first sub-component and a second sub-component, determining a result of the floating point operation for the first sub-component and determining a result of the floating point operation for the second sub-component, and returning a result of the floating point operation.
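The mantissa split can be sketched numerically. The specific operation (exp2), the 6+6-bit split, and all names below are illustrative assumptions, not the patent's implementation: the mantissa is divided into high and low sub-components, a result is determined per sub-component (standing in for small table lookups), and the partial results are combined.

```python
import math

HI_BITS, LO_BITS = 6, 6  # illustrative sub-component widths

def exp2_split(m):
    # Quantize the mantissa m in [0, 1) to HI_BITS + LO_BITS
    # fractional bits, then split it into two sub-components.
    q = round(m * 2**(HI_BITS + LO_BITS))
    hi = q >> LO_BITS
    lo = q & (2**LO_BITS - 1)
    # Determine a result per sub-component; since 2**(a+b) = 2**a * 2**b,
    # the partial results combine multiplicatively.
    r_hi = 2.0 ** (hi / 2**HI_BITS)
    r_lo = 2.0 ** (lo / 2**(HI_BITS + LO_BITS))
    return r_hi * r_lo

print(abs(exp2_split(0.5) - math.sqrt(2.0)))  # negligible error
```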
LOGARITHMIC COMPUTATION TECHNOLOGY THAT USES DERIVATIVES TO REDUCE ERROR
Systems, apparatuses and methods may provide for technology that establishes a point of intersection based on a rate of change in a logarithmic function and generates a first linear estimation of the logarithmic function, wherein the first linear estimation has the point of intersection as an upper bound. Additionally, a second linear estimation of the logarithmic function may be generated, wherein the second linear estimation has the point of intersection as a lower bound. In one example, linear estimations of an antilogarithmic function may be similarly generated based on the rate of change of the antilogarithmic function.
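A sketch of the two-segment idea for log2 on [1, 2): a breakpoint is established from the rate of change of the function, and two chords are fit, one with the breakpoint as upper bound and one with it as lower bound. The breakpoint rule used here (where the derivative 1/(x ln 2) equals 1) and all names are illustrative assumptions.

```python
import math

# Point of intersection chosen from the derivative of log2:
# d/dx log2(x) = 1/(x ln 2) = 1  =>  x0 = 1/ln 2.
X0 = 1.0 / math.log(2.0)

def seg(x1, x2):
    # Linear estimation through (x1, log2 x1) and (x2, log2 x2).
    slope = (math.log2(x2) - math.log2(x1)) / (x2 - x1)
    return lambda x: math.log2(x1) + slope * (x - x1)

lo = seg(1.0, X0)  # first estimation: intersection as upper bound
hi = seg(X0, 2.0)  # second estimation: intersection as lower bound

def log2_approx(x):
    return lo(x) if x < X0 else hi(x)

print(abs(log2_approx(1.2) - math.log2(1.2)))  # small error
```

Splitting at a derivative-chosen point keeps each chord close to the curve, reducing the worst-case error relative to a single linear estimation over [1, 2).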