Patent classifications
G06N3/048
Method for generating web code for UI based on a generative adversarial network and a convolutional neural network
Provided is a method for generating web code for a user interface (UI) based on a generative adversarial network (GAN) and a convolutional neural network (CNN). The method includes the steps described below. A mapping relationship between display effects of a HyperText Markup Language (HTML) element and source code of the HTML element is constructed. A location of an HTML element in an image I is recognized. Complete HTML code for the image I is generated. The similarity between manually written HTML code and the generated complete HTML code, and the similarity between the image I and an image I₁ rendered from the generated complete HTML code, are obtained. After training, an image-to-HTML-code generation model M is obtained. A to-be-processed UI image is input into the model M so as to obtain the corresponding HTML code.
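Two pieces of this pipeline can be sketched in Python: the mapping relationship between element display effects and HTML source code, and a code-similarity measure of the kind used as a training signal. The element names, templates, and the Jaccard token similarity are illustrative assumptions, not the patented GAN/CNN components.

```python
# Illustrative mapping from a recognized display effect (e.g. output of a
# CNN element detector) to an HTML source-code template.  The element
# kinds and templates are assumptions for this sketch.
ELEMENT_MAP = {
    "button": '<button type="button">{text}</button>',
    "text_input": '<input type="text" placeholder="{text}">',
    "heading": "<h1>{text}</h1>",
}

def generate_html(detections):
    """Assemble complete HTML from (element_kind, text) detections."""
    body = "\n".join(
        ELEMENT_MAP[kind].format(text=text) for kind, text in detections
    )
    return f"<html><body>\n{body}\n</body></html>"

def token_similarity(a, b):
    """Jaccard similarity over whitespace tokens: a simple stand-in for
    the similarity between generated and manually written HTML code."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0
```

For example, `generate_html([("button", "OK")])` yields a complete page containing `<button type="button">OK</button>`, and `token_similarity` returns 1.0 for identical code.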
Weight initialization method and apparatus for stable learning of deep learning model using activation function
Provided is an artificial neural network learning apparatus for deep learning. The apparatus includes an input unit configured to acquire input data or training data, a memory configured to store the input data, the training data, and a deep learning artificial neural network model, and a processor configured to perform computation based on the artificial neural network model. The processor sets the initial weight depending on the number of nodes belonging to a first layer and the number of nodes belonging to a second layer of the artificial neural network model, and determines the initial weight by compensation: it multiplies a standard deviation (σ) by the square root of the reciprocal of the probability, under a normal probability distribution, of the section remaining after excluding the section in which the output value of the activation function converges to a specific value.
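The compensation step can be illustrated as follows. A base standard deviation is set from the two layers' node counts (a Xavier-style choice, assumed here), and is then multiplied by the square root of the reciprocal of the probability mass of a standard normal falling in the non-saturating section. The cutoff |z| < 2 marking where the activation is taken to saturate is an assumption of this sketch.

```python
import math
import numpy as np

def compensated_std(fan_in, fan_out, saturation_z=2.0):
    """Base std from the node counts of the two layers (Xavier-style,
    an assumed choice), compensated by sqrt(1/p), where p is the
    standard-normal probability mass of the non-saturating section
    |z| < saturation_z (the cutoff is an assumption of this sketch)."""
    base = math.sqrt(2.0 / (fan_in + fan_out))
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
    p = phi(saturation_z) - phi(-saturation_z)  # mass of the kept section
    return base * math.sqrt(1.0 / p)

def init_weights(fan_in, fan_out, seed=0):
    """Draw an initial weight matrix with the compensated deviation."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, compensated_std(fan_in, fan_out),
                      size=(fan_in, fan_out))
```

Because p < 1, the compensated deviation is always slightly larger than the base deviation, widening the draw to offset the probability mass lost to the saturating section.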
Information processing apparatus, information processing method, and program
An information processing apparatus includes a sparse element detection part, a sparse location weight addition part, a multiplication part, a non-sparse data operation part, and an addition part. The sparse element detection part detects a predetermined sparse element in input data and outputs information about the sparse element. The sparse location weight addition part adds the weight elements corresponding to the sparse element. The multiplication part multiplies an output of the sparse location weight addition part by the sparse element. The non-sparse data operation part performs an operation on the non-sparse elements, i.e., the elements of the input data other than the sparse element. The addition part adds an output of the multiplication part and an output of the non-sparse data operation part.
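The five parts can be traced through a toy dot product. When many inputs share one "sparse" value, the weights at those positions are summed once and multiplied by the shared value a single time, and only the remaining positions need an ordinary multiply-accumulate. This NumPy sketch is an assumed illustration of that factorization, not the patented hardware.

```python
import numpy as np

def sparse_aware_dot(x, w, sparse_value=0.0):
    """Dot product that factors out a repeated sparse element."""
    mask = x == sparse_value                 # sparse element detection part
    weight_sum = w[mask].sum()               # sparse location weight addition part
    sparse_part = weight_sum * sparse_value  # multiplication part
    dense_part = x[~mask] @ w[~mask]         # non-sparse data operation part
    return sparse_part + dense_part          # addition part
```

The result matches `np.dot(x, w)` exactly, for any choice of sparse value; the saving comes from replacing one multiplication per sparse position with a single multiplication of the summed weights.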
Dynamic quantization for deep neural network inference system and method
A method is provided for dynamically quantizing feature maps of a received image. The method includes convolving the image based on a predicted maximum value, a predicted minimum value, trained kernel weights, and the image data. The input data are quantized based on the predicted minimum value and the predicted maximum value. The output of the convolution is computed into an accumulator and re-quantized, and the re-quantized value is output to an external memory. The predicted minimum value and the predicted maximum value are computed from the previous minimum and maximum values with a weighted average or a pre-determined formula. Initial minimum and maximum values are computed with known quantization methods and used to initialize the predicted minimum and maximum values in the quantization process.
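The two core operations, affine quantization against a predicted range and the weighted-average range update, can be sketched as below. The 8-bit width and the 0.9 momentum are assumptions; the patent leaves the weighting formula open.

```python
import numpy as np

def quantize(x, lo, hi, bits=8):
    """Affine quantization of x into [0, 2**bits - 1] using the
    predicted minimum (lo) and maximum (hi) of the range."""
    scale = (hi - lo) / (2 ** bits - 1)
    q = np.clip(np.round((x - lo) / scale), 0, 2 ** bits - 1)
    return q.astype(np.uint8), scale

def update_range(prev_lo, prev_hi, observed_lo, observed_hi, momentum=0.9):
    """Weighted average of previous and observed extrema; the momentum
    value is an assumed choice of the pre-determined formula."""
    return (momentum * prev_lo + (1 - momentum) * observed_lo,
            momentum * prev_hi + (1 - momentum) * observed_hi)
```

Dequantizing with `q * scale + lo` recovers each value to within one quantization step, and the running range tracks the feature-map statistics without needing a full pass before each layer.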
Method for the classification of a biometric trait represented by an input image
The present invention relates to a method for classifying a biometric trait represented by an input image, the method being characterized in that it comprises the implementation, by data processing means (21) of a client (2), of the steps of: (a) Determining, for each of a predefined set of possible general patterns of biometric traits, by means of a convolutional neural network, CNN, whether or not said biometric trait presents said general pattern.
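Step (a) amounts to one binary decision per predefined general pattern. In the sketch below the pattern set (common fingerprint pattern names) and the per-pattern scoring callables are assumptions standing in for trained CNN classifiers.

```python
import numpy as np

# Assumed predefined set of possible general patterns of a fingerprint.
PATTERNS = ["loop", "whorl", "arch"]

def classify_patterns(image, models):
    """For each predefined general pattern, decide whether the biometric
    trait presents it.  `models` maps a pattern name to a callable that
    scores normalized features in [0, 1]; trained CNNs would stand here."""
    features = np.asarray(image, dtype=float).ravel() / 255.0
    return {p: bool(models[p](features) > 0.5) for p in PATTERNS}
```

Running the set of binary classifiers independently, rather than one multi-class model, lets a trait present several general patterns at once or none at all.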
Systems and methods for encrypting data and algorithms
Systems, methods, and computer-readable media for achieving privacy for both data and an algorithm that operates on the data. A system can involve receiving an algorithm from an algorithm provider and receiving data from a data provider, dividing the algorithm into a first algorithm subset and a second algorithm subset and dividing the data into a first data subset and a second data subset, sending the first algorithm subset and the first data subset to the algorithm provider and sending the second algorithm subset and the second data subset to the data provider, receiving a first partial result from the algorithm provider based on the first algorithm subset and first data subset and receiving a second partial result from the data provider based on the second algorithm subset and the second data subset, and determining a combined result based on the first partial result and the second partial result.
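A minimal toy instance of the dividing-and-combining flow, assuming the algorithm is a linear scorer split by index along with the data: each party sees only its algorithm subset and data subset, computes a partial result, and only the partials are combined. The index split and the omission of any cryptographic hardening are simplifications of this sketch, not the full protocol.

```python
import numpy as np

def split_compute(w, x, split):
    """Toy divided computation of a linear score w . x.
    Both the algorithm (weights w) and the data (input x) are divided
    into two subsets; each partial result is computed separately and
    the combined result is their sum."""
    w1, w2 = w[:split], w[split:]    # first / second algorithm subsets
    x1, x2 = x[:split], x[split:]    # first / second data subsets
    partial_a = float(w1 @ x1)       # computed by the algorithm provider
    partial_b = float(w2 @ x2)       # computed by the data provider
    return partial_a + partial_b     # combined result
```

Neither party holds the full algorithm or the full data, yet the combined result equals the score the full computation would produce.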
Transaction-enabled systems and methods for royalty apportionment and stacking
Transaction-enabled systems and methods for royalty apportionment and stacking are disclosed. An example system may include a plurality of royalty generating elements (a royalty stack) each related to a corresponding one or more of a plurality of intellectual property (IP) assets (an aggregate stack of IP). The system may further include a royalty apportionment wrapper to interpret IP licensing terms and apportion royalties to a plurality of owning entities corresponding to the aggregate stack of IP in response to the IP licensing terms and a smart contract wrapper. The smart contract wrapper is configured to access a distributed ledger, interpret an IP description value and IP addition request, to add an IP asset to the aggregate stack of IP, and to adjust the royalty stack.
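The apportionment and stack-adjustment logic can be sketched with plain dictionaries. The field names (`owner`, `rate`) and pro-rata apportionment rule are assumptions; the actual system interprets IP licensing terms via a smart contract wrapper on a distributed ledger.

```python
def apportion_royalties(total, royalty_stack):
    """Apportion a royalty payment across owning entities in proportion
    to each royalty generating element's rate in the stack (assumed
    pro-rata rule; field names are illustrative)."""
    rate_sum = sum(asset["rate"] for asset in royalty_stack)
    return {asset["owner"]: total * asset["rate"] / rate_sum
            for asset in royalty_stack}

def add_ip_asset(royalty_stack, owner, rate):
    """Smart-contract-style IP addition request: adds an IP asset to the
    aggregate stack and thereby adjusts the royalty stack."""
    royalty_stack.append({"owner": owner, "rate": rate})
    return royalty_stack
```

Adding an asset changes every owner's share on the next apportionment, which is the "adjust the royalty stack" behavior the abstract describes.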
METHOD FOR PREDICTING RETROSYNTHESIS OF A COMPOUND MOLECULE AND RELATED APPARATUS
A method for predicting retrosynthesis of a compound molecule and a related apparatus. The method includes: obtaining a target molecule and determining the target molecule as a root node in a tree structure; then, expanding a first leaf node through a target retrosynthesis model to obtain a plurality of second leaf nodes; further, recursively processing the predicted molecule set corresponding to the second leaf nodes and determining a terminal node that satisfies a preset condition; and then, traversing path information corresponding to the terminal node to determine a retrosynthetic path of the target molecule. In this way, a retrosynthesis prediction process for a multi-step reaction is realized. Leaf nodes are gradually and recursively expanded and screened, ensuring the reliability of the reactants determined by the multi-step retrosynthesis prediction process and thereby improving the accuracy of retrosynthesis prediction for compound molecules.
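The recursive expand-and-screen loop can be sketched as a depth-limited tree search. Here `expand` stands in for the target retrosynthesis model (it maps a molecule to candidate precursor sets), the preset terminal condition is assumed to be "the molecule is purchasable", and the depth cutoff is an assumption of this sketch.

```python
def find_route(molecule, expand, purchasable, depth=0, max_depth=3):
    """Recursive retrosynthetic search from a root-node target molecule.
    `expand(molecule)` returns a list of candidate precursor sets (the
    second leaf nodes); a molecule is terminal when it is purchasable.
    Returns a root-first list of molecules on the route, or None."""
    if molecule in purchasable:           # preset terminal condition (assumed)
        return [molecule]
    if depth >= max_depth:                # depth cutoff (assumed)
        return None
    for precursors in expand(molecule):   # expand this leaf node
        routes = [find_route(p, expand, purchasable, depth + 1, max_depth)
                  for p in precursors]    # recursively process each precursor
        if all(r is not None for r in routes):
            # Traverse the path information, root first.
            return [molecule] + [m for r in routes for m in r]
    return None
```

Candidate precursor sets whose members cannot all be resolved to terminal nodes are screened out, which is how the recursion enforces the reliability of the chosen reactants.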
METHOD AND SYSTEM FOR GENERATING A PREDICTIVE MODEL
A method for generating a predictive model for quantization parameters of a neural network is described. The method comprises accessing a first vector of data values corresponding to input values to a first layer implemented in a neural network, generating a feature vector of one or more features extracted from the data values of the first vector, accessing a second vector of data values corresponding to the input values of a second layer implemented in the neural network, subsequent to the first layer, generating a target vector of data values comprising one or more quantization parameters for the second layer, from the data values of the second vector, evaluating, on the basis of the feature vector and the target vector, a predictive model for predicting the one or more quantization parameters of the second layer and modifying the predictive model on the basis of the evaluation.
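The loop the abstract describes, extract features from the first layer's input vector, form a target of the second layer's quantization parameters, then evaluate and modify a predictive model, can be sketched with a linear predictor trained by gradient steps. The specific feature statistics, the choice of min/max as the quantization parameters, and the learning rate are all assumptions of this sketch.

```python
import numpy as np

def extract_features(v1):
    """Feature vector extracted from the first layer's input values;
    these particular statistics are illustrative choices."""
    return np.array([v1.mean(), v1.std(), np.abs(v1).max()])

def target_params(v2):
    """Target vector of quantization parameters for the second layer:
    here, its minimum and maximum (an assumed parameterization)."""
    return np.array([v2.min(), v2.max()])

def sgd_step(w, feats, target, lr=0.01):
    """Evaluate the linear predictive model on one (feature, target)
    pair and modify it on the basis of the evaluation; returns the
    updated weights and the squared-error loss before the update."""
    pred = w @ feats
    err = pred - target
    loss = float((err ** 2).mean())
    return w - lr * np.outer(err, feats), loss
```

Repeating `sgd_step` on observed pairs drives the predicted quantization parameters toward the second layer's true range, so at inference time the parameters can be predicted from the first layer alone.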
SPARSE MATRIX OPERATIONS FOR DEEP LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for parallelizing matrix operations. One of the methods includes implementing a neural network on a parallel processing device, the neural network comprising at least one sparse neural network layer, the sparse neural network layer being configured to receive an input matrix and perform matrix multiplication between the input matrix and a sparse weight matrix to generate an output matrix, the method comprising: for each row of the M rows of the output matrix, determining a plurality of tiles that each include one or more elements from the row; assigning, for each tile of each row, the tile to a respective one of a plurality of thread blocks of the parallel processing device; and computing, for each tile, respective values for each element in the tile using the respective thread block to which the tile was assigned.
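The tiling scheme can be illustrated sequentially in NumPy: each of the M output rows is cut into column tiles, and each tile's elements are computed from only the nonzero entries of the sparse weight matrix. In the patented scheme each tile is assigned to its own thread block on the parallel processing device; this sketch simply loops over tiles, and the dictionary-of-nonzeros representation is an assumed encoding.

```python
import numpy as np

def tiled_sparse_matmul(x, nnz, n_cols, tile=2):
    """x: (M, K) input matrix.  nnz: dict mapping (k, n) -> weight for
    the nonzero entries of a K x N sparse weight matrix.  Each output
    row is split into column tiles of width `tile`; each tile would be
    assigned to a thread block, here tiles are computed in a loop."""
    m_rows = x.shape[0]
    out = np.zeros((m_rows, n_cols))
    # Group nonzeros by output column once, mimicking a per-tile work list.
    by_col = {}
    for (k, n), wv in nnz.items():
        by_col.setdefault(n, []).append((k, wv))
    for m in range(m_rows):                    # each of the M output rows
        for start in range(0, n_cols, tile):   # the tiles of this row
            for n in range(start, min(start + tile, n_cols)):
                out[m, n] = sum(x[m, k] * wv for k, wv in by_col.get(n, []))
    return out
```

Because every tile reads disjoint output elements, the tiles are independent, which is what lets the real implementation hand each one to a separate thread block without synchronization on the output.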