Patent classifications
G06F7/76
CROSS-LINGUAL KNOWLEDGE TRANSFER LEARNING
Methods and systems for training a neural network include training language-specific teacher models using different respective source language datasets. A student model is trained, using the different respective source language datasets and soft labels generated by the language-specific teacher models, including shuffling the source language datasets and shuffling weights of language-dependent layers in language-specific parts of the student model. Weights of language-independent layers of the student model are copied to language-independent layers of a target model to initialize the language-independent layers of the target model. The target model is trained with a target language dataset.
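As an illustrative sketch only (not the patented method), the distillation step described above — teachers producing soft labels, source datasets shuffled together so the student trains on mixed-language batches — can be modeled with a toy linear student; the dataset sizes, temperature, and learning rate are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_labels(logits, T=2.0):
    """Temperature-softened distribution over classes (teacher soft labels)."""
    z = logits / T
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Two hypothetical source-language datasets with per-language teacher logits.
ds_en = {"x": rng.normal(size=(8, 4)), "teacher_logits": rng.normal(size=(8, 3))}
ds_de = {"x": rng.normal(size=(8, 4)), "teacher_logits": rng.normal(size=(8, 3))}

# Shuffle the source datasets together so the student sees mixed-language data.
x = np.vstack([ds_en["x"], ds_de["x"]])
y = np.vstack([soft_labels(ds_en["teacher_logits"]),
               soft_labels(ds_de["teacher_logits"])])
perm = rng.permutation(len(x))
x, y = x[perm], y[perm]

# Train a toy linear student against the teachers' soft labels (cross-entropy).
W = rng.normal(scale=0.1, size=(4, 3))
losses = []
for _ in range(200):
    p = soft_labels(x @ W, T=1.0)
    losses.append(-np.mean(np.sum(y * np.log(p + 1e-12), axis=1)))
    W -= 0.1 * x.T @ (p - y) / len(x)   # gradient of CE w.r.t. the logits
```

The weight-shuffling of language-dependent layers is omitted here; the sketch shows only the shared-data distillation idea.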
REFORMATTING OF TENSORS TO PROVIDE SUB-TENSORS
A tensor of a first select dimension is reformatted to provide one or more sub-tensors of a second select dimension. The reformatting includes determining a number of sub-tensors to be used to represent the tensor. The reformatting further includes creating the number of sub-tensors, in which a sub-tensor is to start on a boundary of a memory unit. Data of the tensor is rearranged to fit within the number of sub-tensors.
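As a rough model of the reformatting above — determine the number of sub-tensors, create them so each starts on a memory-unit boundary, and rearrange the data to fit — the following sketch uses an assumed fixed sub-tensor capacity (`SUB_ELEMS`) standing in for one aligned memory unit:

```python
import numpy as np

SUB_ELEMS = 1024   # assumed capacity of one sub-tensor (one aligned memory unit)

def reformat(tensor):
    """Rearrange a tensor's data into ceil(size / SUB_ELEMS) fixed-size
    sub-tensors; the last is zero-padded, so every sub-tensor begins on
    a memory-unit boundary."""
    flat = tensor.ravel()
    n_sub = -(-flat.size // SUB_ELEMS)        # number of sub-tensors (ceil)
    subs = np.zeros((n_sub, SUB_ELEMS), dtype=tensor.dtype)
    subs.reshape(-1)[:flat.size] = flat       # rearrange data into sub-tensors
    return subs
```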
SELECTING, FROM A POOL OF ITEMS, A SELECTED ITEM TO BE ASSOCIATED WITH A GIVEN REQUEST
An apparatus comprises interface circuitry to receive requests and selection circuitry responsive to the interface circuitry receiving a given request to select, from a pool of items, at least one selected item to be associated with the given request. The selection circuitry comprises a plurality of nodes arranged in a tree structure, each node being configured to select m output signals from n input signals provided to that node, wherein n>m. The apparatus comprises control circuitry configured to output, in dependence on a type of the given request, a suppression signal, and the tree structure comprises a gate node configured to suppress, in response to the suppression signal having a first value, selection from input signals received from a given portion of the tree structure to prevent a subset of the pool of items from being selected for at least one type of request.
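A software analogue of the selection tree above, under simplifying assumptions (n=2, m=1 per node; the gate sits where it can cut off one subtree of the pool), might look like this — class names and the gated-portion placement are illustrative, not from the claims:

```python
class Leaf:
    """A pool item; `valid` marks whether it is currently selectable."""
    def __init__(self, item, valid=True):
        self.item, self.valid = item, valid
    def select(self, suppress):
        return self.item if self.valid else None

class Node:
    """2-to-1 selection node: forwards the first valid candidate.
    If `gate` is set and the suppression signal is asserted, the node
    refuses candidates from its right subtree (the gated portion)."""
    def __init__(self, left, right, gate=False):
        self.left, self.right, self.gate = left, right, gate
    def select(self, suppress):
        candidate = self.left.select(suppress)
        if candidate is not None:
            return candidate
        if self.gate and suppress:      # suppression: skip the gated subtree
            return None
        return self.right.select(suppress)
```

For a request type that asserts the suppression signal, items under the gated subtree can never be selected, even when no other item is valid.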
INPUT SEQUENCE RE-ORDERING METHOD AND INPUT SEQUENCE RE-ORDERING UNIT WITH MULTI INPUT-PRECISION RECONFIGURABLE SCHEME AND PIPELINE SCHEME FOR COMPUTING-IN-MEMORY MACRO IN CONVOLUTIONAL NEURAL NETWORK APPLICATION
An input sequence re-ordering method with a multi input-precision reconfigurable scheme and a pipeline scheme for a computing-in-memory macro in a convolutional neural network application is configured to re-order a plurality of multi-bit input signals and includes performing a scanning step and a re-ordering step. The scanning step includes driving a scanner to scan one group of the multi-bit input signals to determine whether an initial value of a plurality of flag signals in one of a plurality of multi-bit section flags is changed to an inverted initial value according to a plurality of bit numbers of the one group of the multi-bit input signals. The re-ordering step includes driving a sorter to select a part of the one group of the multi-bit input signals corresponding to a plurality of the inverted initial values of the flag signals in the one of the multi-bit section flags.
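Very loosely, the two steps above can be modeled in software as: a scanner that flips a flag (from its initial value to the inverted value) for each input whose examined bit position is set, and a sorter that selects the flagged inputs first. All names and the single-bit-position simplification are assumptions for illustration:

```python
def scan(inputs, bit):
    """Scanning step: flag = 1 (inverted from initial 0) for each multi-bit
    input whose bit at position `bit` is set."""
    return [(x >> bit) & 1 for x in inputs]

def reorder(inputs, bit):
    """Re-ordering step: the sorter selects the inputs whose flags were
    inverted, ahead of the rest."""
    flags = scan(inputs, bit)
    selected = [x for x, f in zip(inputs, flags) if f]
    rest = [x for x, f in zip(inputs, flags) if not f]
    return selected + rest
```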
Masked decoding of polynomials
Various embodiments relate to a method for masked decoding of a polynomial a using an arithmetic sharing of a to perform a cryptographic operation in a data processing system using a modulus q, the method for use in a processor of the data processing system, including: subtracting an offset δ from each coefficient of the polynomial a; applying an arithmetic-to-Boolean (A2B) function on the arithmetic shares of each coefficient a_i of the polynomial a to produce Boolean shares â_i that encode the same secret value a_i; and performing, in parallel for all coefficients, a shared binary search to determine which of the coefficients a_i are greater than a threshold t, to produce a Boolean sharing b̂ of the bitstring b, where each bit of b decodes a coefficient of the polynomial a.
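For intuition, here is a plain, unmasked reference for the decoding that the masked gadget computes in shares: shift each coefficient by δ and compare against t. The parameter values (a Kyber-like q) and the bit polarity are illustrative assumptions, not taken from the claims:

```python
Q = 3329         # assumed modulus q (Kyber's, for illustration only)
DELTA = Q // 4   # assumed offset δ
T = Q // 2       # assumed threshold t

def decode(coeffs):
    """Unmasked reference: bit_i = 1 iff (a_i - δ) mod q exceeds t.
    The masked version computes this comparison via a shared binary
    search over Boolean shares, never exposing a_i."""
    return [1 if ((a - DELTA) % Q) > T else 0 for a in coeffs]
```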
Apparatus and method for performing matrix multiplication operation being secure against side channel attack
A method for performing a matrix multiplication operation being secure against side-channel attacks according to one embodiment, which is performed by a computing device comprising one or more processors and a memory storing one or more programs to be executed by the one or more processors, includes shuffling an order of execution of multiplication operations between elements of a first matrix and elements of a second matrix for a matrix multiplication operation between the first matrix and the second matrix; and performing the matrix multiplication operation based on the shuffled order of execution.
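A minimal sketch of the countermeasure described above — randomly shuffling the execution order of the elementwise multiplications so a side-channel trace cannot be aligned with fixed operand positions — assuming a naive triple-loop product (function names are illustrative):

```python
import random

def shuffled_matmul(A, B, rng=random):
    """Compute C = A @ B with the elementwise multiplications executed
    in a randomly shuffled order; the result is order-independent."""
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]
    ops = [(i, j, l) for i in range(n) for j in range(m) for l in range(k)]
    rng.shuffle(ops)                    # shuffle the order of execution
    for i, j, l in ops:
        C[i][j] += A[i][l] * B[l][j]
    return C
```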
MICROPROCESSOR EQUIPPED WITH AN ARITHMETIC AND LOGIC UNIT AND WITH A HARDWARE SECURITY MODULE
This microprocessor is configured to compute a code C_i, used to detect an execution fault, using the relationship C_i = P ∘ F_α(D_i), where: F_α(D_i) = E_0 ∘ … ∘ E_q ∘ … ∘ E_(NbE−1)(D_i), E_q(x) = T_(α_m,q) ∘ … ∘ T_(α_j,q) ∘ … ∘ T_(α_1,q) ∘ T_(α_0,q)(x), and T_(α_j,q) is a conditional transposition, configured by a secret parameter α_(j,q), that permutes two blocks of bits B_(2j+1,q) and B_(2j,q) of the variable x only when the parameter α_(j,q) is equal to a first value, the blocks B_(2j+1,q) and B_(2j,q) of all of the transpositions T_(α_j,q) of the stage E_q being different from one another and non-overlapping, and the blocks B_(2j+1,q) and B_(2j,q) being placed within one and the same larger block permuted by a transposition of the higher stage E_(q+1).
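The stage structure above can be sketched in software: each stage E_q applies disjoint conditional swaps of block pairs, and each pair of stage q lies inside a single larger block of stage q+1 (here, blocks of size 2^q; the doubling layout and data sizes are assumptions for illustration):

```python
def stage(blocks, alpha):
    """One stage E_q: transposition T_(α_j) swaps the disjoint block pair
    (B_2j, B_(2j+1)) iff its secret bit α_j equals 1."""
    out = list(blocks)
    for j, bit in enumerate(alpha):
        if bit:
            out[2 * j], out[2 * j + 1] = out[2 * j + 1], out[2 * j]
    return out

def f_alpha(data, alphas):
    """Compose stages E_0..E_(NbE-1); stage q permutes blocks of size 2**q,
    so each pair of stage q sits within one block of stage q+1."""
    x = list(data)
    for q, alpha in enumerate(alphas):
        size = 2 ** q
        blocks = [x[i:i + size] for i in range(0, len(x), size)]
        x = [v for b in stage(blocks, alpha) for v in b]
    return x
```

Because every stage only permutes data, F_α is a bit permutation selected by the secret parameters α.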
Multi-level video processing within a vehicular communication network
A system for controlling power distribution within a vehicular communication network, including power source equipment comprising a first port in communication with a network node module of a device, and a Power over Ethernet (POE) management module. The POE management module is configured to enable POE to the device via the first port, monitor a current draw of the device, determine whether the current draw of the device exceeds a threshold, and disable POE to the device responsive to determining that the current draw exceeds the threshold.
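The management module's monitor-and-disable behavior reduces to a simple threshold rule, sketched here with an assumed class name and an arbitrary current threshold:

```python
class PoePort:
    """Illustrative POE management for one port: power is enabled,
    the device's current draw is monitored, and power is disabled
    once the draw exceeds the threshold (and stays disabled)."""
    def __init__(self, threshold_amps):
        self.threshold = threshold_amps
        self.powered = True                 # POE enabled via the first port
    def monitor(self, current_amps):
        if current_amps > self.threshold:
            self.powered = False            # disable POE to the device
        return self.powered
```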