Patent classifications
G06F7/523
Zero-knowledge proof method and electronic device
Disclosed are a zero-knowledge proof method and apparatus, and an electronic device. The method comprises the following steps: selecting a data processing relationship, and processing private data and public data to obtain a calculation result; respectively committing to the private data and the calculation result according to a commitment parameter to obtain a first commitment value and a second commitment value, wherein the commitment parameter is generated by a trusted third party; and generating a non-interactive zero-knowledge proof according to the data processing relationship, wherein the commitment parameter, the first commitment value and the second commitment value are used by a verifier to verify the non-interactive zero-knowledge proof. The present disclosure addresses the technical problem in the related art that such proofs cannot be applied in scenarios where bilinear pairing cannot be used.
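As a rough illustration of the commit-then-prove flow this abstract describes (not the patented scheme), the sketch below commits to a private value and a computed result with Pedersen commitments and produces non-interactive Schnorr-style proofs of knowledge of the openings via the Fiat-Shamir transform; it does not prove the data processing relationship between the two commitments, which is the harder part of the actual method. The group parameters, function names, and values are toy assumptions and are not secure for real use.

    # Hypothetical commit-then-prove sketch; toy parameters, illustration only.
    import hashlib
    import secrets

    p, q = 2039, 1019          # small safe-prime group, p = 2q + 1 (toy values)
    g, h = 4, 9                # generators of the order-q subgroup (assumed "commitment parameter")

    def commit(m, r):
        """Pedersen commitment C = g^m * h^r mod p."""
        return (pow(g, m, p) * pow(h, r, p)) % p

    def fiat_shamir(*vals):
        """Hash public values to a challenge in Z_q (non-interactive challenge)."""
        data = b"|".join(str(v).encode() for v in vals)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def prove_opening(m, r, C):
        """Prove knowledge of (m, r) with C = g^m * h^r, without revealing m."""
        a, b = secrets.randbelow(q), secrets.randbelow(q)
        A = (pow(g, a, p) * pow(h, b, p)) % p
        c = fiat_shamir(g, h, C, A)
        return A, (a + c * m) % q, (b + c * r) % q

    def verify_opening(C, proof):
        A, z1, z2 = proof
        c = fiat_shamir(g, h, C, A)
        return (pow(g, z1, p) * pow(h, z2, p)) % p == (A * pow(C, c, p)) % p

    private_data = 42                      # prover's secret input
    calc_result = private_data * 3         # result of some data processing relationship
    C1 = commit(private_data, r1 := secrets.randbelow(q))   # first commitment value
    C2 = commit(calc_result, r2 := secrets.randbelow(q))    # second commitment value
    print(verify_opening(C1, prove_opening(private_data, r1, C1)),
          verify_opening(C2, prove_opening(calc_result, r2, C2)))   # True True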
Methods and apparatus to correct misattributions of media impressions
Methods, apparatus, and articles of manufacture to correct misattributions of media impressions are disclosed. An example method includes: obtaining first demographic-based impressions via a beacon transmitted in response to access to content by a first set of panelists; obtaining, from a database proprietor, second demographic-based impressions of the content on a second set of persons; forming a pseudo-inverse matrix determined based in part on the first impressions and having a truncated value and a damped value, to form third demographic-based impressions of the content on the second set of persons based on the second impressions; and computing at least partially corrected demographic-based impression values by multiplying a vector of database proprietor impression data by the pseudo-inverse matrix.
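A minimal numpy sketch of the final correction step might look as follows, assuming a small hypothetical misattribution matrix and illustrative truncation and damping values; in the method itself these quantities are estimated from the panelist beacon impressions and the database proprietor records.

    # Hypothetical correction sketch: multiply proprietor impression counts by a
    # truncated, damped pseudo-inverse of an assumed misattribution matrix.
    import numpy as np

    # Rows/cols = demographic buckets; entry [i, j] ~ probability that an impression
    # truly in bucket j is attributed to bucket i (made-up values for illustration).
    misattribution = np.array([
        [0.85, 0.10, 0.05],
        [0.10, 0.80, 0.15],
        [0.05, 0.10, 0.80],
    ])

    def truncated_damped_pinv(A, trunc=1e-2, damp=0.1):
        """Pseudo-inverse via SVD: drop singular values below `trunc` (truncation)
        and shrink the rest with Tikhonov-style damping s / (s^2 + damp^2)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = s > trunc
        s_inv = np.where(keep, s / (s**2 + damp**2), 0.0)
        return Vt.T @ np.diag(s_inv) @ U.T

    proprietor_impressions = np.array([1200.0, 900.0, 400.0])   # as reported (misattributed)
    corrected = truncated_damped_pinv(misattribution) @ proprietor_impressions
    print(corrected)   # partially corrected demographic impression counts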
Labeling a dataset
A method, system and computer program product, the method comprising: obtaining a first model trained on cases and labels, the first model providing a prediction in response to an input case; obtaining a second model trained using the cases and indications of whether the predictions of the first model are correct, the second model providing a correctness prediction for the first model; determining a case for which the second model predicts that the first model provides an incorrect prediction; further training the first model on a first corpus including the case and a label, thereby improving the performance of the first model; providing the case to the first model to obtain a first prediction; and further training the second model on a second corpus including the case and a correctness label, the correctness label being “correct” if the first prediction is equal to the label, thereby improving the performance of the second model.
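A schematic sketch of this loop, under the assumption of scikit-learn classifiers and a synthetic dataset standing in for the cases and labels (neither of which comes from the application itself), might look like this:

    # Hypothetical two-model labeling loop; model choices and data are illustrative.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_lab, y_lab = X[:200], y[:200]        # initially labeled cases
    X_pool, y_pool = X[200:], y[200:]      # pool; labels requested only when needed

    # First model: predicts the task label for a case.
    first = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

    # Second model: predicts whether the first model's prediction is correct.
    correct = (first.predict(X_lab) == y_lab).astype(int)
    second = RandomForestClassifier(random_state=0).fit(X_lab, correct)

    # Pick cases the second model expects the first model to get wrong.
    suspect = second.predict(X_pool) == 0
    X_new, y_new = X_pool[suspect], y_pool[suspect]   # these cases get labeled

    # Retrain the first model on the enlarged corpus, then refresh the second
    # model with correctness labels derived from the retrained first model.
    X_all, y_all = np.vstack([X_lab, X_new]), np.hstack([y_lab, y_new])
    first = LogisticRegression(max_iter=1000).fit(X_all, y_all)
    second = RandomForestClassifier(random_state=0).fit(
        X_all, (first.predict(X_all) == y_all).astype(int))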
APPARATUS FOR PROCESSING RECEIVED DATA
To speed up decoding of a range code, a decompression circuit calculates, in parallel for a plurality of bits, a plurality of candidate bit values for each bit of an N-bit string based on a plurality of possible bit histories of the bit preceding the K-th bit, and repeatedly selects the correct bit value of the K-th bit from the plurality of candidate bit values based on the correct bit history of the bit preceding the K-th bit, thereby decoding the N-bit string.
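The decode structure can be modeled in software roughly as below. This is a hypothetical toy: the real range-decoder step (whose state depends on the full bit history) is replaced by a simple context model that depends only on the previous bit, so only the speculate-then-select pattern is illustrated, not the actual circuit.

    # Toy model of speculative decoding: compute candidates for every possible
    # previous-bit history up front, then select along the resolved history.
    P1_GIVEN_PREV = {0: 0.2, 1: 0.7}   # toy context model: P(bit = 1 | previous bit)

    def decode_bit(code_value, prev_bit):
        """Stand-in for one range-decoder step: threshold the (toy) per-bit code
        value against the context-dependent probability."""
        return 1 if code_value < P1_GIVEN_PREV[prev_bit] else 0

    def decode_serial(code_values, first_prev=0):
        """Baseline: strictly sequential decode (each bit waits for its history)."""
        out, prev = [], first_prev
        for v in code_values:
            prev = decode_bit(v, prev)
            out.append(prev)
        return out

    def decode_speculative(code_values, first_prev=0):
        """Candidates for both possible values of the preceding bit are computed
        for every position ('in parallel'); the selection pass then just picks
        candidates along the correct history."""
        candidates = [{h: decode_bit(v, h) for h in (0, 1)} for v in code_values]
        out, prev = [], first_prev
        for cand in candidates:            # cheap selection on the critical path
            prev = cand[prev]
            out.append(prev)
        return out

    codes = [0.05, 0.60, 0.90, 0.10, 0.65]
    assert decode_serial(codes) == decode_speculative(codes)
    print(decode_speculative(codes))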
ANALYSIS APPARATUS, ANALYSIS METHOD AND PROGRAM
An analysis apparatus according to one embodiment includes: an obtainment unit configured to obtain a data set of multiple data items having randomness; and an analysis unit configured to calculate, by using a mapping Φ that extends kernel mean embedding, an inner product or a norm of Φ(μ) and Φ(ν) mapped onto an RKHM, as an inner product or a norm of probability measures μ and ν on the data set that take values in a von Neumann algebra.
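In the standard scalar-valued setting (an ordinary RKHS rather than the RKHM extension this application targets), the inner product of the kernel mean embeddings of two empirical measures reduces to an average of kernel evaluations over sample pairs. The numpy sketch below, with an assumed Gaussian kernel and synthetic samples, shows that reduced case only.

    # Scalar-valued kernel mean embedding sketch (not the RKHM extension).
    import numpy as np

    def rbf_kernel(X, Y, gamma=0.5):
        """Gaussian kernel matrix k(x_i, y_j) = exp(-gamma * ||x_i - y_j||^2)."""
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def embedding_inner_product(X, Y, kernel=rbf_kernel):
        """<Phi(mu), Phi(nu)> for empirical measures mu, nu given by samples X, Y:
        the mean of k(x_i, y_j) over all sample pairs."""
        return kernel(X, Y).mean()

    def embedding_norm(X, kernel=rbf_kernel):
        """||Phi(mu)|| = sqrt(<Phi(mu), Phi(mu)>)."""
        return np.sqrt(embedding_inner_product(X, X, kernel))

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(200, 2))   # samples from mu
    Y = rng.normal(0.5, 1.0, size=(200, 2))   # samples from nu
    print(embedding_inner_product(X, Y), embedding_norm(X), embedding_norm(Y))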
NEURAL NETWORK ACCELERATOR, ACCELERATION METHOD, AND APPARATUS
A neural network accelerator is provided, including: a preprocessing module (301), configured to perform first forward Winograd transform on a target matrix corresponding to an input feature map, to obtain a transformed target matrix, where the preprocessing module (301) is further configured to perform second forward Winograd transform on a convolution kernel, to obtain a transformed convolution kernel; a matrix operation module (302), configured to perform a matrix multiplication operation on a first matrix and a second matrix, to obtain a multiplication result, where the first matrix is constructed based on the transformed target matrix, and the second matrix is constructed based on the transformed convolution kernel; and a vector operation module (303), configured to perform inverse Winograd transform on the multiplication result, to obtain an output feature map.
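A minimal numpy sketch of the transform, multiply, inverse-transform pipeline, using the textbook 1D Winograd F(2,3) algorithm as a stand-in for the 2D accelerator described here; in this single-tile, single-channel toy case the accelerator's matrix multiplication step reduces to an element-wise product.

    # Winograd F(2,3): 2 outputs, 3-tap kernel, 4-sample input tile.
    import numpy as np

    B_T = np.array([[1, 0, -1,  0],
                    [0, 1,  1,  0],
                    [0, -1, 1,  0],
                    [0, 1,  0, -1]], dtype=float)
    G = np.array([[1.0,  0.0, 0.0],
                  [0.5,  0.5, 0.5],
                  [0.5, -0.5, 0.5],
                  [0.0,  0.0, 1.0]])
    A_T = np.array([[1, 1,  1,  0],
                    [0, 1, -1, -1]], dtype=float)

    d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile ("target matrix" in 1D)
    g = np.array([0.5, -1.0, 2.0])       # convolution kernel

    V = B_T @ d        # first forward Winograd transform (input tile)
    U = G @ g          # second forward Winograd transform (kernel)
    M = U * V          # product of the transformed operands
    y = A_T @ M        # inverse Winograd transform -> 2 output samples

    print(y, np.correlate(d, g, mode="valid"))   # both give the same result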