Patent classifications
G06F7/556
Communication system and method for achieving high data rates using modified nearly-equiangular tight frame (NETF) matrices
A method includes generating a set of symbols based on an incoming data vector. The set of symbols includes K symbols, K being a positive integer. A first transformation matrix including an equiangular tight frame (ETF) transformation or a nearly equiangular tight frame (NETF) transformation is generated, having dimensions N×K, where N is a positive integer having a value less than K. A second transformation matrix having dimensions K×K is generated based on the first transformation matrix. A third transformation matrix having dimensions K×K is generated by performing a series of unitary transformations on the second transformation matrix. A first data vector is transformed into a second data vector having a length N based on the third transformation matrix and the set of symbols. A signal representing the second data vector is sent to a transmitter for transmission to a receiver.
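To make the frame properties concrete, here is a minimal numpy sketch using the classic N = 2, K = 3 "Mercedes-Benz" equiangular tight frame; the specific frame and the final symbol-to-vector mapping are illustrative assumptions, not the patented construction:

```python
import numpy as np

# First transformation matrix: the N = 2, K = 3 "Mercedes-Benz" ETF
# (three unit vectors 120 degrees apart in R^2).
N, K = 2, 3
angles = np.pi / 2 + 2 * np.pi * np.arange(K) / K
F = np.vstack([np.cos(angles), np.sin(angles)])   # shape (N, K)

# Tight-frame property: F F^T = (K / N) * I_N
print(np.allclose(F @ F.T, (K / N) * np.eye(N)))  # True

# Equiangularity: |<f_i, f_j>| is the same for every pair i != j
G = F.T @ F                                       # K x K Gram matrix
off_diag = np.abs(G[~np.eye(K, dtype=bool)])
print(np.allclose(off_diag, off_diag[0]))         # True

# Transmit-side sketch: map a set of K symbols to a length-N data vector
symbols = np.array([1.0, -1.0, 1.0])
second_data_vector = F @ symbols                  # length N
```

Because N < K, the frame compresses K symbols into an N-dimensional vector while the tight-frame property keeps the mapping well conditioned.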
Machine Learning Computer
A computer comprising a plurality of processing units, each processing unit having an execution unit and access to computer memory which stores code executable by the execution unit and input values of an input vector to be processed by the code, the code, when executed, configured to access the computer memory to obtain multiple pairs of input values of the input vector, determine a maximum or corrected maximum input value of each pair as a maximum result element, determine and store in a computer memory a maximum or corrected maximum result of each pair of maximum result elements as an approximation to the natural log of the sum of the exponents of the input values and access the computer memory to obtain each input value and apply it to the maximum or corrected maximum result to generate each output value of a Softmax output vector.
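The claim describes the log-sum-exp trick computed as a pairwise tree reduction: each pair is combined with a maximum plus an optional correction term, and the single result is then applied to every input value to form the Softmax output. A minimal Python sketch of that reduction (the function names are illustrative, not from the patent):

```python
import math

def corrected_max(a, b):
    # Exact two-term log-sum-exp: log(e^a + e^b) = max(a, b) + log(1 + e^{-|a-b|}).
    # Dropping the log1p correction leaves the plain "maximum" variant.
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def log_sum_exp(values):
    # Pairwise tree reduction over the input vector: combine pairs,
    # then pairs of results, until one value remains.
    while len(values) > 1:
        reduced = [corrected_max(values[i], values[i + 1])
                   for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:
            reduced.append(values[-1])
        values = reduced
    return values[0]

def softmax(values):
    # Each output is exp(x_i - log_sum_exp(x)), so no explicit division
    # or overflow-prone sum of raw exponentials is needed.
    lse = log_sum_exp(list(values))
    return [math.exp(x - lse) for x in values]
```

With the correction term included, the pairwise reduction reproduces the exact log-sum-exp (up to rounding); dropping it trades accuracy for a cheaper max-only datapath.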
Optimization of neural networks using hardware calculation efficiency and adjustment factors
In one embodiment, a method includes receiving a request for an operation to be performed; determining that the operation is associated with a machine-learning algorithm and, in response, routing the operation to a computing circuit; and performing, at the computing circuit, the operation, including: determining a linear-domain product of a first log-domain number and a second log-domain number associated with the operation based on a summation of the first log-domain number and the second log-domain number and outputting a third log-domain number approximating the linear-domain product of the first log-domain number and the second log-domain number; converting the third log-domain number to a first linear-domain number; and summing the first linear-domain number and a second linear-domain number associated with the operation and outputting a third linear-domain number as the summed result.
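The core idea is that a product in the linear domain costs only an addition in the log domain. A minimal Python sketch of the multiply-then-accumulate step described above (variable names are illustrative):

```python
import math

def log_domain_multiply(log_a, log_b):
    # A linear-domain product becomes a log-domain summation:
    # log(a * b) = log(a) + log(b), so no hardware multiplier is needed.
    return log_a + log_b

# Operation sketch: compute a * b + c.
a, b, c = 3.0, 5.0, 2.0
log_product = log_domain_multiply(math.log(a), math.log(b))  # third log-domain number
linear_product = math.exp(log_product)                       # convert to linear domain
result = linear_product + c                                  # sum in the linear domain
```

In hardware the log/linear conversions are typically lookup tables or piecewise-linear approximations, which is where the approximation error in the claim comes from; the exact `math.exp`/`math.log` above stand in for those units.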
Random Symbol Generation System and Method
A method (200) for evaluating random symbols includes generating random symbols from a chaotic system (210), evaluating an output of the chaotic system by a raw entropy estimator and by a Lyapunov exponent estimator (220), and verifying a plurality of parameters based on the outputs of the raw entropy estimator and the Lyapunov exponent estimator (230).
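A minimal Python sketch of the pipeline, using the logistic map at r = 4 as the chaotic system (an illustrative assumption; the abstract does not specify the system). For this map the true Lyapunov exponent is ln 2, and a two-symbol coarse-graining yields about one bit of raw entropy per sample:

```python
import math

def logistic_orbit(x0, n, r=4.0):
    # Chaotic system (210): iterate the logistic map x -> r x (1 - x).
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def lyapunov_estimate(orbit, r=4.0):
    # Lyapunov exponent estimator (220): average of log|f'(x)| along the
    # orbit, with f'(x) = r (1 - 2x). A positive estimate indicates chaos.
    return sum(math.log(abs(r * (1 - 2 * x))) for x in orbit) / len(orbit)

def raw_entropy_estimate(orbit, bins=2):
    # Raw entropy estimator (220): Shannon entropy (bits) of a coarse
    # symbolization of the orbit into equal-width bins.
    counts = [0] * bins
    for x in orbit:
        counts[min(int(x * bins), bins - 1)] += 1
    probs = [c / len(orbit) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)
```

Verification (230) would then check both estimates against thresholds, e.g. a Lyapunov exponent safely above zero and an entropy rate near the design value, before the symbols are accepted as random.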
APPROXIMATION OF MATRICES FOR MATRIX MULTIPLY OPERATIONS
A processing device is provided which comprises memory configured to store data and a processor configured to receive a portion of data of a first matrix comprising a first plurality of elements and receive a portion of data of a second matrix comprising a second plurality of elements. The processor is also configured to determine values for a third matrix by dropping a number of products from products of pairs of elements of the first and second matrices based on approximating the products of the pairs of elements as a sum of the exponents of the pairs of elements and performing matrix multiplication on remaining products of the pairs of elements of the first and second matrices.
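The magnitude of each candidate product can be estimated from the sum of its operands' floating-point exponents alone, so small products can be dropped before any multiplication happens. A minimal Python sketch of that idea (the ranking scheme and the `keep` parameter are illustrative assumptions):

```python
import math

def approx_dot(a_row, b_col, keep):
    # Rank each candidate product by the sum of the binary exponents of its
    # operands (math.frexp returns (mantissa, exponent)) -- a cheap proxy
    # for the product's magnitude -- then drop all but the `keep` largest.
    ranked = sorted(range(len(a_row)),
                    key=lambda i: math.frexp(a_row[i])[1] + math.frexp(b_col[i])[1],
                    reverse=True)
    return sum(a_row[i] * b_col[i] for i in ranked[:keep])

def approx_matmul(A, B, keep):
    # Third matrix: multiply using only the `keep` largest products
    # per output element.
    return [[approx_dot(row, [B[t][j] for t in range(len(B))], keep)
             for j in range(len(B[0]))] for row in A]
```

With `keep` equal to the inner dimension the result is exact; shrinking `keep` trades accuracy for fewer multiplications, which is the point of the approximation.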
FINAL EXPONENTIATION CALCULATION DEVICE, PAIRING OPERATION DEVICE, CRYPTOGRAPHIC PROCESSING DEVICE, FINAL EXPONENTIATION CALCULATION METHOD, AND COMPUTER READABLE MEDIUM
In a final exponentiation calculation device, a decomposition unit (221) decomposes an exponent part into an easy part and a hard part, using a cyclotomic polynomial, in a final exponentiation calculation part of a pairing operation on an elliptic curve represented by a polynomial r(u), a polynomial q(u), a polynomial t(u), an embedding degree k, and a parameter u. A transformation unit (222) transforms the hard part obtained by decomposition by the decomposition unit (221) into a linear sum of the polynomial q(u). An exponentiation calculation unit (23) calculates the final exponentiation calculation part, using the easy part and the hard part transformed into the linear sum of the polynomial q(u).
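The decomposition can be checked numerically. The sketch below uses the Barreto–Naehrig family (embedding degree k = 12), a standard pairing-friendly setting chosen here purely for illustration: the final exponentiation exponent (q^k − 1)/r factors into an easy part, handled with cheap Frobenius exponentiations, and a hard part Φ_12(q)/r:

```python
# Toy BN-curve (k = 12) parameters as polynomials in u; u = 1 gives q = 103, r = 97.
def bn_params(u):
    q = 36*u**4 + 36*u**3 + 24*u**2 + 6*u + 1   # polynomial q(u)
    r = 36*u**4 + 36*u**3 + 18*u**2 + 6*u + 1   # polynomial r(u)
    t = 6*u**2 + 1                              # trace polynomial t(u)
    return q, r, t

u, k = 1, 12
q, r, t = bn_params(u)
assert q + 1 - t == r                 # BN curves have cofactor 1

# Decompose the exponent d = (q^k - 1) / r via the cyclotomic polynomial
# Phi_12(q) = q^4 - q^2 + 1, since q^12 - 1 = (q^6 - 1)(q^2 + 1) * Phi_12(q).
phi12 = q**4 - q**2 + 1
assert phi12 % r == 0                 # r divides Phi_12(q)

easy = (q**6 - 1) * (q**2 + 1)        # "easy" part: cheap Frobenius powers
hard = phi12 // r                     # "hard" part, expanded as a linear sum in q
assert easy * hard == (q**k - 1) // r
```

The device's transformation unit (222) then rewrites the hard part as a linear combination of powers of q(u), so that it too can be evaluated mostly with Frobenius maps; the integer check above only verifies that the easy/hard split reassembles the full exponent.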