Patent classifications
G06F18/21345
METHOD AND APPARATUS WITH NEURAL NETWORK OPERATION USING SPARSIFICATION
A processor-implemented neural network operation method includes: receiving a first activation gradient and a first threshold corresponding to a layer included in a neural network; sparsifying the first activation gradient based on the first threshold; determining a second activation gradient by performing a neural network operation based on the sparsified first activation gradient; determining a second threshold by updating the first threshold based on the second activation gradient; and performing a neural network operation based on the second activation gradient and the second threshold.
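The sparsify-operate-update loop this abstract describes can be sketched in NumPy. The quantile-based threshold update and the toy linear "neural network operation" are assumptions made for illustration; the abstract does not specify either rule:

```python
import numpy as np

def sparsify(grad, threshold):
    """Zero out activation-gradient entries whose magnitude is below the threshold."""
    return np.where(np.abs(grad) >= threshold, grad, 0.0)

def update_threshold(grad, keep_ratio=0.1):
    """Assumed update rule: choose the threshold that keeps roughly the top
    `keep_ratio` fraction of the new gradient's magnitudes."""
    return float(np.quantile(np.abs(grad), 1.0 - keep_ratio))

rng = np.random.default_rng(0)
g1 = rng.normal(size=1000)                  # first activation gradient
t1 = 0.5                                    # first threshold for this layer

sparse_g1 = sparsify(g1, t1)                # sparsified first activation gradient
W = rng.normal(size=(1000, 1000)) / np.sqrt(1000)
g2 = W @ sparse_g1                          # second activation gradient (toy layer op)
t2 = update_threshold(g2)                   # second threshold, updated from g2
```

The next iteration would then apply `t2` to `g2`, continuing the loop described in the claim.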
Method for inserting domain information, method and apparatus for learning of generative model
An apparatus for learning of a generative model according to an embodiment includes an encoder configured to extract a feature from input data and output a feature vector, a decoder configured to restore the input data on the basis of the feature vector, and a domain module configured to generate the domain information to be learned through the generative model as domain vector blocks, each with a size corresponding to that of the feature vector, concatenate the feature vector and a domain vector block, and input the concatenated vector to the decoder.
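The concatenation step above can be sketched briefly. The one-hot encoding of the domain block is an assumption; the abstract only states that the block's size corresponds to the feature vector's:

```python
import numpy as np

def make_domain_block(domain_id, block_size):
    """Assumed encoding: a one-hot block sized to match the feature vector."""
    block = np.zeros(block_size)
    block[domain_id % block_size] = 1.0
    return block

rng = np.random.default_rng(1)
feature = rng.normal(size=64)                            # encoder output
domain_block = make_domain_block(domain_id=2, block_size=feature.size)
decoder_input = np.concatenate([feature, domain_block])  # passed to the decoder
```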
DISTRIBUTED DATA INTEGRATION DEVICE, DISTRIBUTED DATA INTEGRATION METHOD, AND PROGRAM
A distributed data integration device includes an acquisition unit configured to acquire, for a piece of analysis target data, an anchor data intermediate representation and an analysis target intermediate representation, the anchor data intermediate representation being an intermediate representation obtained by converting anchor data by a first function, the anchor data being data commonly used in integration of a plurality of the pieces of analysis target data that are distributed, the analysis target intermediate representation being an intermediate representation obtained by converting the analysis target data by the first function, an anchor data conversion unit configured to convert, for the piece of analysis target data, a plurality of the anchor data intermediate representations by a second function, a calculation unit configured to calculate, for the piece of analysis target data, the second function that minimizes a difference between the plurality of the anchor data intermediate representations, and an analysis target data conversion unit configured to convert, for the piece of analysis target data, the analysis target intermediate representation by the second function.
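The role of the anchor data can be sketched for two parties. The random linear maps standing in for the first functions and the least-squares fit for the second function are assumptions chosen to keep the sketch short:

```python
import numpy as np

rng = np.random.default_rng(2)
anchors = rng.normal(size=(50, 8))       # anchor data shared by every party

# First functions: each party's private conversion (random linear maps here).
f1 = rng.normal(size=(8, 4))
f2 = rng.normal(size=(8, 4))
A1, A2 = anchors @ f1, anchors @ f2      # anchor data intermediate representations

# Second function for party 2: the linear map minimizing ||A2 @ G - A1||_F,
# i.e. the map that makes the two anchor representations agree.
G, *_ = np.linalg.lstsq(A2, A1, rcond=None)

# Party 2's analysis target data, converted into the shared space.
X2 = rng.normal(size=(10, 8))
X2_aligned = (X2 @ f2) @ G
```

Because every party fits its second function against the same anchor representations, the distributed analysis-target data land in a common space without the raw data ever being exchanged.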
Mixed data fingerprinting with principal components analysis
Principal components analysis is applied to data sets to fingerprint the dataset or to compare the dataset to a “wild file” that may have been constructed from data found in the dataset. Principal components analysis allows for the reduction of data used for comparison down to a parsimonious compressed signature of a dataset. Datasets with different patterns among the variables will have different patterns of principal components. The principal components of variables (or a relevant subset thereof) in a wild file may be computed and statistically compared to the principal components of identical variables in a data provider's reference file to provide a score. This constitutes a unique and compressed signature of a file that can be used for identification and comparison with similarly defined patterns from other files.
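The fingerprint-and-score idea can be sketched as follows. The choice of top-3 principal axes and the absolute-cosine score are assumptions (a principal axis's sign is arbitrary, hence the absolute value); the abstract does not fix a particular statistic:

```python
import numpy as np

def pca_signature(X, k=3):
    """Compressed signature: the top-k principal axes of the column-centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

def signature_score(sig_a, sig_b):
    """Mean absolute cosine between matched principal axes."""
    return float(np.mean(np.abs(np.sum(sig_a * sig_b, axis=1))))

rng = np.random.default_rng(3)
reference = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))  # provider's file
wild = reference[:400] + 0.01 * rng.normal(size=(400, 6))        # derived "wild file"
unrelated = rng.normal(size=(400, 6))                            # independent file

score_wild = signature_score(pca_signature(reference), pca_signature(wild))
score_unrelated = signature_score(pca_signature(reference), pca_signature(unrelated))
```

A wild file built from the reference data inherits its correlation structure, so its principal axes align with the reference's and score higher than an unrelated file's.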
METHODS OF PROVIDING TRAINED HYPERDIMENSIONAL MACHINE LEARNING MODELS HAVING CLASSES WITH REDUCED ELEMENTS AND RELATED COMPUTING SYSTEMS
A method of providing a trained machine learning model can include providing a trained non-binary hyperdimensional machine learning model that includes a plurality of trained hypervector classes, wherein each of the trained hypervector classes includes N elements, and then, eliminating selected ones of the N elements from the trained non-binary hyperdimensional machine learning model based on whether the selected element has a similarity with other ones of the N elements, to provide a sparsified trained non-binary hyperdimensional machine learning model.
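The element-elimination step can be sketched with a variance criterion: positions whose values are most alike across the classes discriminate least and are zeroed. The variance criterion is an assumption; the abstract only says elimination is similarity-based:

```python
import numpy as np

def sparsify_classes(classes, keep):
    """Zero element positions whose values are most similar across classes
    (low variance across classes => the position barely discriminates)."""
    variance = classes.var(axis=0)
    keep_idx = np.argsort(variance)[-keep:]
    mask = np.zeros(classes.shape[1], dtype=bool)
    mask[keep_idx] = True
    return np.where(mask, classes, 0.0), mask

rng = np.random.default_rng(4)
classes = rng.normal(size=(10, 10000))   # 10 trained hypervector classes, N = 10000
sparse_classes, mask = sparsify_classes(classes, keep=2000)
```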
Vehicle system prognosis device and method
A method for determining a vehicle system prognosis includes: detecting a predetermined characteristic of a vehicle with one or more sensors; receiving a plurality of sensor signals from the one or more sensors and determining an input time series of data based on the sensor signals; clustering a matrix of time series data, generated from the input time series of data, into a predetermined number of hyperplanes; extracting extracted features that are indicative of an operation of a vehicle system from a sparse temporal matrix based on data point behavior with respect to two or more hyperplanes within the sparse temporal matrix, and determining an operational status of the vehicle system based on the extracted features, the sparse temporal matrix being based on the predetermined number of hyperplanes; and communicating the operational status of the vehicle system to an operator or crew member of the vehicle.
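The cluster-then-extract step can be sketched loosely. Cluster centers stand in for the patent's hyperplanes, and the distance-to-two-nearest-clusters feature and the status rule are assumptions, since the abstract leaves both unspecified:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means; its centers stand in for the claimed hyperplanes."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

rng = np.random.default_rng(5)
series = rng.normal(size=(200, 16))      # windows of sensor time-series data
centers = kmeans(series, k=3)

# Feature: each window's behavior with respect to its two nearest clusters.
d = np.sqrt(((series[:, None] - centers) ** 2).sum(-1))
features = np.sort(d, axis=1)[:, :2]
status = "degraded" if features[:, 0].mean() > 4.0 else "nominal"
```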
METHODS AND APPARATUS FOR EXTRACTING PROFILES FROM THREE-DIMENSIONAL IMAGES
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud. A 3D region of interest is determined that includes a width along a first axis, a height along a second axis, and a depth along a third axis. The 3D points within the 3D region of interest are represented as a set of 2D points based on coordinate values of the first and second axes. The 2D points are grouped into a plurality of 2D bins arranged along the first axis. For each 2D bin, a representative 2D position is determined based on the associated set of 2D points. Each of the representative 2D positions is connected to its neighboring representative 2D positions to generate the 2D profile.
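The binning pipeline above can be sketched directly. Using the per-bin mean as the representative 2D position is an assumption; the abstract does not name the statistic:

```python
import numpy as np

def profile_2d(points, roi_min, roi_max, num_bins):
    """Project 3D points inside the ROI to (x, y), bin along x, and take a
    representative (mean) 2D position per bin; connecting consecutive
    representatives yields the 2D profile polyline."""
    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    xy = points[inside][:, :2]
    edges = np.linspace(roi_min[0], roi_max[0], num_bins + 1)
    bins = np.clip(np.digitize(xy[:, 0], edges) - 1, 0, num_bins - 1)
    return np.array([xy[bins == b].mean(axis=0)
                     for b in range(num_bins) if np.any(bins == b)])

rng = np.random.default_rng(6)
pts = rng.uniform(0, 1, size=(1000, 3))
prof = profile_2d(pts, np.zeros(3), np.ones(3), num_bins=10)
```

Because the bins are ordered along the first axis, the representative positions come out sorted in x and can be connected neighbor-to-neighbor without further work.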
NETWORK STATE MODELLING
Apparatuses and methods in a communication system are disclosed. In a network element, an encoder module obtains as input network data representative of the current condition of the communications network, the network data comprising a plurality of values indicative of the performance of network elements, and performs (800) feature reduction, providing at its output a set of activations. A clustering module applies (802) batch normalisation and an amplitude limitation to the output of the encoder module to obtain normalised activations. A clustering control module calculates a projection of the normalised activations and determines (804) a clustering loss. A decoder module calculates (806) a reconstruction loss. The network element backpropagates the reconstruction loss and the clustering loss through the modules.
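The forward pass and the two losses can be sketched without the training loop. The tanh amplitude limit, the fixed cluster centres, and the linear encoder/decoder are assumptions standing in for the unspecified modules:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(32, 20))                 # network-state vectors (KPI values)

W_enc = rng.normal(size=(20, 5)) * 0.1        # encoder: feature reduction
W_dec = rng.normal(size=(5, 20)) * 0.1        # decoder

acts = X @ W_enc                              # encoder activations
norm = (acts - acts.mean(0)) / (acts.std(0) + 1e-8)  # batch normalisation
limited = np.tanh(norm)                       # amplitude limitation

centroids = rng.normal(size=(4, 5))           # assumed cluster centres
d = ((limited[:, None] - centroids) ** 2).sum(-1)
clustering_loss = d.min(axis=1).mean()        # distance to nearest centre
reconstruction_loss = ((X - limited @ W_dec) ** 2).mean()
total_loss = reconstruction_loss + clustering_loss   # backpropagated in training
```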
LEARNING APPARATUS, IDENTIFICATION APPARATUS, METHODS THEREOF, AND PROGRAM
By using training data containing tuples of texts for M types of tasks in N types of languages and correct labels of the texts as input, an optimized parameter group is obtained that defines N inter-task shared transformation functions f(1), . . . , f(N) corresponding to the N types of languages n and M inter-language shared transformation functions g(1), . . . , g(M) corresponding to the M types of tasks m. At least one of N and M is an integer greater than or equal to 2. Each f(n) outputs a latent vector, which corresponds to the contents of an input text in a certain language n but does not depend on the language n, to g(1), . . . , g(M), and each g(m) takes as input the latent vector output from any one of f(1), . . . , f(N) and outputs an output label corresponding to the latent vector for a certain task m.
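The routing structure of this abstract can be sketched with linear maps: one per-language transformation into a shared latent space, and one per-task transformation out of it. The linear maps and the 3-class heads are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
N, M, dim_in, dim_latent = 2, 3, 10, 4

# One transformation per language n: text features -> language-independent latent.
lang_encoders = [rng.normal(size=(dim_in, dim_latent)) for _ in range(N)]
# One transformation per task m: shared latent -> label scores (3 classes, say).
task_heads = [rng.normal(size=(dim_latent, 3)) for _ in range(M)]

def predict(x, language, task):
    """Route an input through its language's transformation, then any task's
    head; the latent in between is shared across all N x M pairings."""
    latent = x @ lang_encoders[language]
    return int(np.argmax(latent @ task_heads[task]))

x = rng.normal(size=dim_in)
label = predict(x, language=1, task=2)
```

Because every task head accepts the latent from every language transformation, adding a new language requires only one new transformation, not M new models.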