Patent classifications
G06F17/16
METHOD AND DEVICE FOR CODE-BASED GENERATION OF A KEY PAIR FOR ASYMMETRIC CRYPTOGRAPHY
According to various embodiments, a method for code-based generation of a key pair for asymmetric cryptography is described, including: generating a private key defining a linear code; determining a parity check or generator matrix for the linear code; blinding a sub-matrix of the parity check or generator matrix; generating a blinded inverse matrix by inverting the blinded sub-matrix or by inverting a square matrix contained in the blinded sub-matrix; de-blinding the blinded inverse matrix to generate an inverse matrix; and generating a public key for the private key using the inverse matrix.
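The blinding idea above can be sketched numerically. This is a minimal illustration, not the patented construction: it assumes the code is binary (arithmetic over GF(2)) and blinds a secret square sub-matrix `m` by multiplying it with random invertible matrices `a` and `b`, so the inversion routine never operates on `m` directly; since (A·M·B)⁻¹ = B⁻¹·M⁻¹·A⁻¹, multiplying the blinded inverse by `b` on the left and `a` on the right de-blinds it back to M⁻¹. All function names are illustrative.

```python
import numpy as np

def gf2_inverse(m):
    """Invert a square binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = m.shape[0]
    a = np.concatenate([m.copy() % 2, np.eye(n, dtype=np.uint8)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if a[r, col])  # StopIteration if singular
        a[[col, pivot]] = a[[pivot, col]]
        for r in range(n):
            if r != col and a[r, col]:
                a[r] ^= a[col]          # row elimination is XOR over GF(2)
    return a[:, n:]

def random_invertible_gf2(n, rng):
    """Sample random binary matrices until an invertible one is found."""
    while True:
        m = rng.integers(0, 2, size=(n, n), dtype=np.uint8)
        try:
            gf2_inverse(m)
            return m
        except StopIteration:
            continue

def blinded_gf2_inverse(m, rng):
    """Invert m without ever running the inversion on m itself."""
    n = m.shape[0]
    a = random_invertible_gf2(n, rng)
    b = random_invertible_gf2(n, rng)
    blinded = (a @ m @ b) % 2           # blinding step
    inv_blinded = gf2_inverse(blinded)  # inversion sees only the blinded matrix
    return (b @ inv_blinded @ a) % 2    # de-blinding: B (A M B)^-1 A = M^-1

rng = np.random.default_rng(1)
secret = random_invertible_gf2(6, rng)
inv = blinded_gf2_inverse(secret, rng)
```

The de-blinded result is a true inverse: `(secret @ inv) % 2` is the identity matrix.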
Optimizer-based pruner for neural networks
A neural network pruning system can sparsely prune neural network models using an optimizer-based approach that is agnostic to the architecture of the model being pruned. The neural network pruning system can prune by operating on the parameter vector of the full model and the gradient vector of the loss function with respect to the model parameters. The neural network pruning system can iteratively update parameters based on the gradients, while zeroing out as many parameters as possible based on a preconfigured penalty.
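One way such an architecture-agnostic update can look is a gradient step followed by soft-thresholding (the proximal operator of an L1 penalty), which touches only the flat parameter and gradient vectors. This is a sketch under that assumption, not the patent's specific optimizer; the toy quadratic loss and the `prune_step` name are illustrative.

```python
import numpy as np

def prune_step(params, grad, lr=0.1, penalty=0.05):
    """One pruning update on the flat parameter vector: a gradient step,
    then soft-thresholding, which zeroes any parameter whose magnitude
    falls below lr * penalty after the step."""
    updated = params - lr * grad
    return np.sign(updated) * np.maximum(np.abs(updated) - lr * penalty, 0.0)

# toy loss: L(w) = 0.5 * ||w - target||^2, so grad = w - target
target = np.array([1.0, 0.02, -0.8, 0.01, 0.5])
w = np.zeros_like(target)
for _ in range(200):
    w = prune_step(w, w - target)

sparsity = np.mean(w == 0.0)
```

The update never inspects the model structure, only the vectors: small-magnitude coordinates (0.02 and 0.01 here) are driven exactly to zero, while the surviving coordinates converge to the target shrunk by the penalty.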
Method and system for convolution
A method and system relating generally to convolution are disclosed. In such a method, an image patch is selected from input data for a first channel of a plurality of input channels of an input layer. The selected image patch is transformed to obtain a transformed image patch, and the transformed image patch is stored. A plurality of predetermined transformed filter kernels is also stored. A stored transformed filter kernel of the plurality of stored predetermined transformed filter kernels is element-wise multiplied, by multipliers, with the stored transformed image patch for a second channel of the plurality of input channels different from the first channel, to obtain a product. The product is inverse transformed to obtain a filtered patch for the image patch.
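The transform/element-wise-multiply/inverse-transform pipeline can be sketched with the FFT as the transform (the patent may equally well use a Winograd transform; that choice is an assumption here). The filter kernel is transformed once ahead of time, each selected patch is transformed on the fly, the two transforms are multiplied element-wise, and the inverse transform yields the filtered patch. Function names are illustrative; a nested-loop convolution serves as the reference.

```python
import numpy as np

def direct_conv2d(patch, kernel):
    """Reference 'valid' 2-D convolution (flipped kernel) by nested loops."""
    ph, pw = patch.shape
    kh, kw = kernel.shape
    out = np.zeros((ph - kh + 1, pw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(patch[i:i+kh, j:j+kw] * kernel[::-1, ::-1])
    return out

def transform_kernel(kernel, patch_shape):
    """Precompute the transformed filter kernel for a given patch size."""
    ph, pw = patch_shape
    kh, kw = kernel.shape
    return np.fft.rfft2(kernel, s=(ph + kh - 1, pw + kw - 1))

def fft_conv2d(patch, transformed_kernel, kernel_shape):
    """Transform the patch, multiply element-wise with the stored
    transformed kernel, inverse transform, and crop to the valid region."""
    kh, kw = kernel_shape
    ph, pw = patch.shape
    size = (ph + kh - 1, pw + kw - 1)         # padding makes circular = linear conv
    tp = np.fft.rfft2(patch, s=size)          # transformed image patch
    full = np.fft.irfft2(tp * transformed_kernel, s=size)
    return full[kh-1:ph, kw-1:pw]             # filtered patch for the image patch

rng = np.random.default_rng(0)
patch = rng.normal(size=(8, 8))
kernel = rng.normal(size=(3, 3))
tk = transform_kernel(kernel, patch.shape)
filtered = fft_conv2d(patch, tk, kernel.shape)
```

Because the convolution theorem turns convolution into an element-wise product in the transform domain, `filtered` matches the nested-loop reference while the kernel transform is amortized across all patches that reuse it.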
Quantum modulation-based data compression
Data compression includes: inputting data comprising a vector that requires a first amount of memory; compressing the vector into a compressed representation while preserving information content of the vector, including: encoding, using one or more non-quantum processors, at least a portion of the vector to implement a quantum gate matrix; and modulating a reference vector using the quantum gate matrix to generate the compressed representation, wherein the compressed representation requires a second amount of memory that is less than the first amount of memory; and outputting the compressed representation to be displayed, stored, and/or further processed.
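A classically simulated toy can illustrate the gate-modulation idea, though it is only a sketch and not the patented encoding: a unit 2-vector is encoded (on a non-quantum processor) as the angle of a single-qubit RY rotation, a valid 2x2 quantum gate matrix; that gate modulates a fixed reference vector to reproduce the data. The compressed representation is one number instead of two, and no information is lost because the gate is unitary, hence invertible. All names here are illustrative.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation: a 2x2 unitary (quantum gate matrix)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def compress(vec):
    """Encode a unit 2-vector as one gate angle: half the memory,
    information preserved because the gate is invertible."""
    assert np.isclose(np.linalg.norm(vec), 1.0)
    return 2.0 * np.arctan2(vec[1], vec[0])

def decompress(theta):
    reference = np.array([1.0, 0.0])      # fixed reference vector |0>
    return ry(theta) @ reference          # modulate it with the gate

v = np.array([0.6, 0.8])
theta = compress(v)
restored = decompress(theta)
```

Applying RY(θ) to the reference vector gives [cos(θ/2), sin(θ/2)], so re-running the modulation recovers the original vector exactly from the smaller representation.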
Reinforcement learning using a relational network for generating data encoding relationships between entities in an environment
A neural network system is proposed, including an input network for extracting, from state data, respective entity data for each of a plurality of entities which are present, or at least potentially present, in the environment. The entity data describes the entity. The neural network system contains a relational network for parsing this data, which includes one or more attention blocks which may be stacked to perform successive operations on the entity data. The attention blocks each include a respective transform network for each of the entities. The transform network for each entity is able to transform data which the transform network receives for the entity into modified entity data for the entity, based on data for a plurality of the other entities. An output network is arranged to receive data output by the relational network, and use the received data to select a respective action.
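A minimal sketch of such an attention block, assuming scaled dot-product self-attention over the per-entity vectors (the patent's transform networks may differ): each entity's modified data is a weighted combination of value projections of all the entities, and blocks are stacked by feeding one block's output to the next. The weight matrices are random here purely for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_block(entities, wq, wk, wv):
    """One relational attention block: each entity's vector is rewritten
    using data from the other entities via scaled dot-product attention."""
    q, k, v = entities @ wq, entities @ wk, entities @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = softmax(scores, axis=-1)   # how much each entity attends to each other
    return weights @ v                   # modified entity data, one row per entity

rng = np.random.default_rng(0)
n_entities, d = 5, 8
x = rng.normal(size=(n_entities, d))     # entity data extracted from state data
for _ in range(2):                       # two stacked attention blocks
    wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
    x = attention_block(x, wq, wk, wv)
```

The output keeps one vector per entity, so an output network can consume it directly to select an action.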