Patent classifications
G06N7/046
TRANSFER LEARNING-BASED USE OF PROTEIN CONTACT MAPS FOR VARIANT PATHOGENICITY PREDICTION
The technology disclosed relates to a variant pathogenicity classifier. The classifier includes memory, a variant encoding sub-network, a protein contact map generation sub-network, and a pathogenicity scoring sub-network. The memory stores a reference amino acid sequence of a protein, and an alternative amino acid sequence of the protein that contains a variant amino acid caused by a variant nucleotide. The variant encoding sub-network is configured to process the alternative amino acid sequence and generate a processed representation of the alternative amino acid sequence. The protein contact map generation sub-network is configured to process the reference amino acid sequence and the processed representation of the alternative amino acid sequence, and generate a protein contact map of the protein. The pathogenicity scoring sub-network is configured to process the protein contact map and generate a pathogenicity indication of the variant amino acid.
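The three-stage pipeline in the abstract can be sketched as follows. This is a minimal illustration only: each function is a hypothetical placeholder standing in for one of the claimed sub-networks, not the patented models themselves.

```python
# Placeholder pipeline: encode variant sequence -> generate contact map
# from reference + variant encoding -> score pathogenicity from the map.

def encode_variant(alt_seq):
    # Variant encoding sub-network stand-in: a toy integer encoding.
    return [ord(aa) - ord('A') for aa in alt_seq]

def contact_map(ref_seq, alt_encoding):
    # Contact map generation stand-in: a placeholder pairwise score
    # combining reference residues with the variant encoding.
    n = len(ref_seq)
    return [[(ord(ref_seq[i]) + alt_encoding[j]) % 7 for j in range(n)]
            for i in range(n)]

def pathogenicity_score(cmap):
    # Pathogenicity scoring stand-in: mean of the map, squashed to [0, 1].
    total = sum(sum(row) for row in cmap)
    cells = len(cmap) * len(cmap[0])
    return total / (cells * 6)  # 6 is the max placeholder pair score

ref, alt = "MKTAY", "MKTVY"  # hypothetical reference/alternative sequences
score = pathogenicity_score(contact_map(ref, encode_variant(alt)))
```

The point of the sketch is the data flow: the reference sequence and the processed variant representation jointly produce the contact map, and only the map feeds the final score.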
Subgraph tile fusion in a convolutional neural network
A method of subgraph tile fusion in a convolutional neural network, including partitioning a network into at least one subgraph node, determining a layer order of at least one layer of the at least one subgraph node, determining an input layer of the at least one subgraph node, determining a weight layer of the at least one subgraph node, determining an output layer of the at least one subgraph node, and fusing the at least one subgraph node, the input layer, the weight layer, and the output layer in the layer order.
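The claimed steps can be sketched as below, assuming a network is simply a flat list of layer descriptors; the partitioning size and the fusion rule are illustrative choices, not the patented ones.

```python
# Partition a network into subgraph nodes, then fuse each node's input,
# weight, and output layers in layer order into one fused descriptor.

def partition(network, size):
    # Partition the network into subgraph nodes of at most `size` layers.
    return [network[i:i + size] for i in range(0, len(network), size)]

def fuse_subgraph(subgraph):
    # Determine the layer order, then fuse the input, weight, and output
    # layers of the subgraph node into a single fused descriptor.
    ordered = sorted(subgraph, key=lambda layer: layer["order"])
    return {
        "input":   next(l["name"] for l in ordered if l["kind"] == "input"),
        "weights": [l["name"] for l in ordered if l["kind"] == "weight"],
        "output":  next(l["name"] for l in ordered if l["kind"] == "output"),
    }

net = [
    {"name": "in0",  "kind": "input",  "order": 0},
    {"name": "w0",   "kind": "weight", "order": 1},
    {"name": "out0", "kind": "output", "order": 2},
]
fused = [fuse_subgraph(sg) for sg in partition(net, 3)]
```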
Laundry scheduling apparatus and method
Disclosed is a laundry scheduling apparatus. The apparatus includes a communication unit, an output unit, and a processor. The processor is configured to pair with at least one washing machine via the communication unit; obtain laundry preference parameters of a user, generated by learning based on at least one of a deep learning algorithm or a machine learning algorithm using at least one of a laundry log of the user or laundry satisfaction information of the user as input data; generate laundry scheduling information using washing machine information about the paired at least one washing machine, the laundry preference parameters, and laundry item information obtained via at least one of a user input unit, an interface unit, or a sensor; and cause the output unit to output the laundry scheduling information.
LOW ENTROPY BROWSING HISTORY FOR CONTENT QUASI-PERSONALIZATION
The present disclosure provides systems and methods for content quasi-personalization or anonymized content retrieval via aggregated browsing history of a large plurality of devices, such as millions or billions of devices. A sparse matrix may be constructed from the aggregated browsing history, and dimensionally reduced, reducing entropy and providing anonymity for individual devices. Relevant content may be selected via quasi-personalized clusters representing similar browsing histories, without exposing individual device details to content providers.
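The core anonymization idea can be sketched in a few lines. This is an illustrative stand-in only: hashing site names into a handful of buckets substitutes for the claimed dimensionality reduction, and the cluster key substitutes for the quasi-personalized clusters.

```python
# Build a device-by-bucket count matrix from browsing histories, reducing
# dimensionality by hashing sites into a few buckets; devices then collapse
# into coarse, low-entropy clusters that never expose raw histories.

def build_matrix(histories, n_buckets=4):
    # histories: {device_id: [site, ...]}; each row is a bucketed count vector.
    matrix = {}
    for device, sites in histories.items():
        row = [0] * n_buckets
        for site in sites:
            row[hash(site) % n_buckets] += 1  # reduction by hashing
        matrix[device] = row
    return matrix

def cluster_key(row):
    # Devices with identical coarse profiles share a cluster; only this
    # low-entropy key, never the raw history, is exposed downstream.
    return tuple(1 if count > 0 else 0 for count in row)
```

Note that distinct devices with similar histories map to the same key, which is exactly the property that lets content be selected per cluster rather than per device.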
UNIFIED FRAMEWORK FOR DYNAMIC CLUSTERING AND DISCRETE TIME EVENT PREDICTION
A single unified machine learning model (e.g., a neural network) is trained on sequences of events for multiple users, with a combined loss function, to perform both supervised event prediction and unsupervised time-varying clustering for a sequence of events (e.g., a sequence representing a user behavior). The unified model can then be used to, given a sequence of events as input, predict the next event to occur after the last event in the sequence and generate a clustering result by performing a clustering operation on the sequence of events. As part of predicting the next event, the unified model is trained to predict an event type for the next event and a time of occurrence for the next event. In certain embodiments, the unified model is a neural network comprising a recurrent neural network (RNN) such as a Long Short-Term Memory (LSTM) network.
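The combined objective can be sketched as a weighted sum of the two parts. The individual loss forms below (a 0/1 type penalty, squared time error, squared distance to a centroid) are placeholders for illustration, not the patented losses.

```python
# Combined loss = supervised next-event loss + weighted clustering loss.

def event_loss(pred_type, true_type, pred_time, true_time):
    # Supervised part: event-type mismatch penalty plus squared time error.
    return (0.0 if pred_type == true_type else 1.0) + (pred_time - true_time) ** 2

def clustering_loss(embedding, centroid):
    # Unsupervised part: squared distance of the sequence embedding to its
    # assigned (time-varying) cluster centroid.
    return sum((e - c) ** 2 for e, c in zip(embedding, centroid))

def combined_loss(pred_type, true_type, pred_time, true_time,
                  embedding, centroid, alpha=0.5):
    # alpha weights the clustering term against the prediction term.
    return (event_loss(pred_type, true_type, pred_time, true_time)
            + alpha * clustering_loss(embedding, centroid))
```

Training a single network against this sum is what couples the two tasks: gradients from the clustering term shape the same representation used for next-event prediction.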
Information processing apparatus, ising device, and information processing apparatus control method
Arithmetic circuits calculate d−1 energy values (h_i2 to h_id) indicating energies generated by 2-body to d-body coupling, on the basis of a plurality of weight values indicating the strength of 2-body to d-body coupling of 2 to d neurons, including a first neuron whose output value is allowed to be updated, and the n-bit output values of n neurons. An adder circuit calculates a sum of these values, and a comparator circuit compares a value based on the sum of that sum and a noise value with a threshold, to determine the output value of the first neuron. An update circuit outputs n-bit updated output values in which one bit has been updated on the basis of a selection signal and the output value of the first neuron. A holding circuit holds the updated output values and outputs them as the n-bit output values used by the arithmetic circuits.
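One update cycle can be sketched as below, under two simplifying assumptions: only the 2-body coupling term is shown (the claim covers 2-body through d-body), and the noise value and threshold are simple scalars rather than circuit outputs.

```python
# One update cycle: energy -> add noise -> compare with threshold -> flip bit.

def update_neuron(i, outputs, weights, noise, threshold=0.0):
    # Arithmetic circuit: energy from 2-body coupling with neuron i.
    h2 = sum(weights[i][j] * outputs[j] for j in range(len(outputs)) if j != i)
    # Comparator: energy plus a noise value is compared with a threshold
    # to determine the new output value of neuron i.
    new_bit = 1 if (h2 + noise) > threshold else 0
    # Update circuit: emit the n-bit output with just bit i updated.
    updated = list(outputs)
    updated[i] = new_bit
    return updated

w = [[0, 2], [2, 0]]  # symmetric 2-body coupling weights
state = update_neuron(0, [0, 1], w, noise=0.1)
```

The holding circuit's role corresponds to feeding `state` back in as `outputs` on the next cycle, so the arithmetic circuits always see the latest n-bit values.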
Table row identification using machine learning
Techniques for table row identification using machine learning are disclosed herein. For example, a method can include detecting a table body in a document by processing the document using a machine learning (ML)-based table body model; predicting an initial table row index for one or more words among a plurality of words obtained in the document, wherein the one or more words are determined to be within the table body; and determining a table row index for the one or more words using an ML-based table row model that is trained based on the predicted initial table row index for the one or more words.
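The two-stage idea can be sketched with simple heuristics standing in for the two ML models: a vertical-position bucketing plays the role of the initial row-index prediction, and a normalization pass plays the role of the trained row model refining that prediction.

```python
# Stage 1: predict an initial row index per word from its vertical position.
# Stage 2: refine the initial indices into consecutive final row indices.

def initial_row_index(words, row_height=10):
    # words: [(text, y), ...] already located within the detected table body.
    return [y // row_height for _, y in words]

def refine_rows(initial):
    # Map the initial indices onto consecutive final row indices, mimicking
    # a trained row model smoothing the stage-1 prediction.
    mapping = {idx: rank for rank, idx in enumerate(sorted(set(initial)))}
    return [mapping[i] for i in initial]

words = [("Name", 3), ("Qty", 5), ("Bolt", 14), ("12", 16)]
rows = refine_rows(initial_row_index(words))
```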
Machine learning for input fuzzing
Provided are methods and systems for automatically generating input grammars for grammar-based fuzzing by utilizing machine-learning techniques and sample inputs. Neural-network-based statistical learning techniques are used for the automatic generation of input grammars. Recurrent neural networks are used to learn a statistical input model that is also generative: the model is used to generate new inputs based on the probability distribution of the learned model.
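The learn-then-generate loop can be sketched with a character bigram model standing in for the recurrent network (a deliberate simplification: an RNN conditions on the whole prefix, a bigram model only on the last character). The mechanics are the same: learn a next-character distribution from sample inputs, then sample fresh inputs from it.

```python
# Learn a next-character distribution from sample inputs, then generate
# new fuzzing inputs by sampling from the learned distribution.
import random
from collections import defaultdict

def learn(samples):
    model = defaultdict(list)
    for s in samples:
        for prev, nxt in zip("^" + s, s + "$"):  # ^/$ mark start/end
            model[prev].append(nxt)
    return model

def generate(model, rng, max_len=20):
    out, ch = [], "^"
    while len(out) < max_len:
        ch = rng.choice(model[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

m = learn(["{a:1}", "{b:2}"])
new_input = generate(m, random.Random(0))
```

Sampling rather than replaying is what makes the model generative: the outputs are statistically similar to the training inputs but need not appear among them, which is the useful property for fuzzing.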
APPARATUS AND METHOD FOR TRAINING DEEP NEURAL NETWORK
A method for training a deep neural network according to an embodiment includes training a deep neural network model using a first data set including a plurality of labeled data and a second data set including a plurality of unlabeled data, assigning a ground-truth label value to some of the plurality of unlabeled data, updating the first data set and the second data set such that the data to which the ground-truth label value is assigned is included in the first data set, and further training the deep neural network model using the updated first data set and the updated second data set.
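One round of the claimed loop can be sketched as below. The `train` and `confident_label` functions are placeholders (real training and a real confidence criterion go there); the sketch shows the data-set bookkeeping: confidently labeled items move from the second (unlabeled) set into the first (labeled) set before the model is trained again.

```python
# Semi-supervised round: train -> pseudo-label confident items -> move them
# into the labeled set -> train again on the updated sets.

def train(model_state, labeled):
    # Placeholder for training the deep neural network on (data, label) pairs.
    return {"seen": model_state.get("seen", 0) + len(labeled)}

def confident_label(item):
    # Placeholder confidence rule standing in for the model's prediction.
    return ("even", True) if item % 2 == 0 else (None, False)

def semi_supervised_round(model, labeled, unlabeled):
    model = train(model, labeled)
    still_unlabeled = []
    for item in unlabeled:
        label, confident = confident_label(item)
        if confident:
            labeled = labeled + [(item, label)]  # move into the first data set
        else:
            still_unlabeled.append(item)
    return train(model, labeled), labeled, still_unlabeled

model, labeled, unlabeled = semi_supervised_round({}, [(1, "odd")], [2, 3, 4])
```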