Patent classifications
G06N20/10
Reinforcement learning for concurrent actions
A computer-implemented method comprises instantiating a policy function approximator. The policy function approximator is configured to calculate a plurality of estimated action probabilities in dependence on a given state of an environment. Each of the plurality of estimated action probabilities corresponds to a respective one of a plurality of discrete actions performable by a reinforcement learning agent within the environment. An initial plurality of estimated action probabilities is calculated in dependence on a first state of the environment. Two or more of the plurality of discrete actions are concurrently performed within the environment when the environment is in the first state. In response to the concurrent performance, a reward value is received. In response to the received reward value being greater than a baseline reward value, the policy function approximator is updated such that it is configured to calculate an updated plurality of estimated action probabilities.
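The update rule described above can be sketched as a REINFORCE-style policy gradient step. This is a minimal illustration, not the patented implementation: it assumes a linear policy with independent per-action sigmoid probabilities (so several actions can be sampled and performed concurrently), and reinforces the performed action vector only when the reward exceeds the baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear policy: state -> per-action probability via sigmoid.
n_actions, state_dim = 3, 4
weights = np.zeros((n_actions, state_dim))

def action_probabilities(state):
    """Estimated probability of each discrete action given the state."""
    return 1.0 / (1.0 + np.exp(-weights @ state))

def update(state, actions, reward, baseline, lr=0.1):
    """Reinforce the concurrently performed actions only when the
    received reward exceeds the baseline reward value."""
    global weights
    if reward <= baseline:
        return
    probs = action_probabilities(state)
    # Gradient of the log-likelihood of the sampled action vector
    # under independent Bernoulli action probabilities.
    grad = (actions - probs)[:, None] * state[None, :]
    weights += lr * (reward - baseline) * grad

state = np.array([1.0, 0.5, -0.5, 0.2])
probs_before = action_probabilities(state)
# Sample two or more actions concurrently (independent Bernoulli draws).
actions = (rng.random(n_actions) < probs_before).astype(float)
update(state, actions, reward=1.0, baseline=0.0)
probs_after = action_probabilities(state)
```

After the update, the probability of each performed action rises and that of each unperformed action falls, matching the "updated plurality of estimated action probabilities" in the abstract.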
Systems for real-time intelligent haptic correction to typing errors and methods thereof
Systems and methods of the present disclosure enable context-aware haptic error notifications. The systems and methods include a processor that receives input segments into a software application from a character input component and determines a destination. A context identification model predicts a context classification of the input segments based at least in part on the software application and the destination, and one or more potential errors are determined in the input segments based on the context classification. An error characterization machine learning model determines an error type classification and an error severity score associated with each potential error. A haptic feedback pattern and a haptic event latency are then determined for each potential error based on its error type classification and error severity score.
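A minimal sketch of the final mapping step, with entirely hypothetical error types, pattern names, and scaling: more severe errors get a stronger pattern and a shorter event latency.

```python
def haptic_feedback(error_type: str, severity: float):
    """Choose a haptic pattern, event latency (ms), and intensity from the
    error type classification and a severity score in [0.0, 1.0].
    Pattern names and constants are illustrative assumptions."""
    patterns = {
        "spelling": "single_pulse",
        "grammar": "double_pulse",
        "context": "long_buzz",
    }
    pattern = patterns.get(error_type, "single_pulse")
    # More severe errors are signaled sooner and more strongly.
    latency_ms = int(500 * (1.0 - severity))
    intensity = 0.3 + 0.7 * severity
    return pattern, latency_ms, intensity
```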
Computational framework for modeling of physical process
Techniques, systems, and devices are described for providing a computational framework for estimating high-dimensional stochastic behaviors. In one exemplary aspect, a method for performing numerical estimation includes receiving a set of measurements of a stochastic behavior. The set of measurements follows a non-standard probability distribution and is non-linearly correlated. Also, a non-linear relationship exists between a set of system variables that describes the stochastic behavior and the corresponding set of measurements. The method includes determining, based on the set of measurements, a numerical model of the stochastic behavior. The numerical model comprises a feature space of non-correlated features corresponding to the stochastic behavior. The non-correlated features have a dimensionality of M and the set of measurements has a dimensionality of N, M being smaller than N. The method includes generating a set of approximated system variables corresponding to the set of measurements based on the numerical model.
Complex-valued neural network with learnable non-linearities in medical imaging
For machine training and application of a trained complex-valued machine learning model, an activation function of the machine learning model, such as a neural network, includes a learnable parameter that is complex or defined in a complex domain with two dimensions, such as real and imaginary or magnitude and phase dimensions. The complex learnable parameter is trained for any of various applications, such as MR fingerprinting, other medical imaging, or non-medical uses.
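A minimal numpy sketch of the idea of a learnable complex-valued parameter inside an activation function; the parameterization below (a complex scale applied after a magnitude non-linearity) is an illustrative assumption, not the claimed architecture:

```python
import numpy as np

class ComplexActivation:
    """Activation with a learnable complex-valued parameter.

    During training, `alpha` would be updated like any other weight;
    its real/imaginary (or magnitude/phase) parts give the two
    learnable dimensions described in the abstract."""

    def __init__(self):
        self.alpha = 0.5 + 0.1j  # learnable complex parameter (initial value)

    def forward(self, z):
        # Squash the magnitude, keep the phase, then scale and rotate
        # the result by the complex parameter alpha.
        return self.alpha * np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))
```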
Systems and methods for determining relative importance of one or more variables in a nonparametric machine learning model
Systems and methods for determining the relative importance of one or more variables in a non-parametric model include: receiving raw values of the variables corresponding to one or more entities; processing the raw values using a statistical model to obtain probability values for the variables and an overall prediction value for each entity; determining a plurality of cumulative distributions for the variables based on the raw values and the number of entities having a specific raw value; grouping the variables into a plurality of equally sized buckets based on the cumulative distributions; determining a mean probability value for each bucket; assigning a rank number to each bucket based on the mean probability values; compiling a table for the entities based on the raw values and the buckets corresponding to the raw values; and determining the relative importance of the variables for the entities based on the rank numbers.
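The bucketing-and-ranking steps can be sketched for a single variable as follows. This is a simplified illustration of the idea (equally sized buckets in cumulative-distribution order, ranked by mean model probability), with the table-compilation step omitted:

```python
import numpy as np

def bucket_ranks(raw_values, probabilities, n_buckets=4):
    """Split one variable's raw values into equally sized buckets in
    cumulative-distribution (sorted) order, then rank the buckets by
    their mean model probability (rank 1 = highest mean)."""
    order = np.argsort(raw_values)              # cumulative-distribution order
    buckets = np.array_split(order, n_buckets)  # equally sized buckets
    means = [probabilities[idx].mean() for idx in buckets]
    ranks = np.empty(n_buckets, dtype=int)
    ranks[np.argsort(means)[::-1]] = np.arange(1, n_buckets + 1)
    return means, ranks
```

A variable whose bucket means (and hence ranks) vary strongly with the raw value is more important to the model's predictions than one whose buckets all score alike.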
Method and device for optimizing neural network
The embodiments of this application provide a method and device for optimizing a neural network. The method includes: binarizing and bit-packing input data of a convolution layer along the channel direction to obtain compressed input data; binarizing and bit-packing each convolution kernel of the convolution layer along the channel direction to obtain each corresponding compressed convolution kernel; dividing the compressed input data sequentially, in convolutional-computation order, into blocks of the same size as each compressed convolution kernel, wherein the data input to a single convolutional computation form one data block; and performing a convolutional computation on each block of the compressed input data with each compressed convolution kernel sequentially, obtaining each convolutional result, and obtaining the multiple output data of the convolution layer from the convolutional results.
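The core arithmetic enabled by binarizing and bit-packing along the channel direction can be sketched as an XNOR/popcount dot product, the standard trick in binary networks (shown here as an illustration, not the patented device):

```python
import numpy as np

def bit_pack(x):
    """Binarize (+1 -> bit 1, -1 -> bit 0) and bit-pack along the
    channel axis into uint8 words."""
    return np.packbits((x > 0).astype(np.uint8))

def binary_dot(packed_a, packed_b, n_channels):
    """Dot product of two +1/-1 vectors from their packed bits:
    XOR finds mismatching channels; counting bits (popcount) over the
    valid channel positions gives matches minus mismatches."""
    xor = np.bitwise_xor(packed_a, packed_b)
    mismatches = int(np.unpackbits(xor)[:n_channels].sum())
    matches = n_channels - mismatches
    return matches - mismatches
```

One convolutional output value is exactly this dot product between a compressed input block and a compressed kernel, so an entire channel dot product costs a few word-level XORs and popcounts instead of per-channel multiplications.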
Scalable attributed graph embedding for large-scale graph analytics
A computer-implemented method for calculating scalable attributed graph embeddings for large-scale graph analytics includes computing a node embedding for a first node-attributed graph in a node embedded space. One or more random attributed graphs are generated in the node embedded space. A graph embedding operation is performed using a dissimilarity measure between one or more raw graphs and the one or more generated random graphs, and an edge-attributed graph is transformed into a second node-attributed graph using an adjoint graph.
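The adjoint-graph step can be illustrated with the classical line-graph construction: each attributed edge becomes an attributed node, and two new nodes are linked when the original edges shared an endpoint. A minimal sketch (not the patented pipeline):

```python
from itertools import combinations

def adjoint_graph(edges, edge_attrs):
    """Convert an edge-attributed graph into a node-attributed graph via
    its adjoint (line) graph: each edge becomes a node carrying the edge
    attribute; nodes are connected when the edges shared an endpoint."""
    nodes = {e: edge_attrs[e] for e in edges}  # edge -> node with attribute
    new_edges = [
        (e1, e2) for e1, e2 in combinations(edges, 2)
        if set(e1) & set(e2)                   # shared endpoint
    ]
    return nodes, new_edges
```

After this transformation, any node-attributed embedding method can be applied to a graph that originally carried its attributes on edges.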
Differentiating between live and spoof fingers in fingerprint analysis by machine learning
The present disclosure relates to a method performed in a fingerprint analysis system for facilitating differentiation between a live finger and a spoof finger. The method comprises acquiring a plurality of time-sequences of images, each time-sequence showing a respective finger as it engages a detection surface of a fingerprint sensor. Each time-sequence comprises at least a first image and a last image showing a fingerprint topography of the finger, wherein the respective fingers of some of the time-sequences are known to be live fingers and those of the other time-sequences are known to be spoof fingers. The method also comprises training a machine learning algorithm on the plurality of time-sequences to produce a model of the machine learning algorithm for differentiating between a live finger and a spoof finger.
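The value of a time-sequence over a single image can be illustrated with a toy feature and a one-parameter classifier. Everything below is an illustrative assumption (live skin typically deforms and moistens while engaging the sensor, so frames change over time), not the disclosed algorithm:

```python
import numpy as np

def sequence_feature(time_sequence):
    """A hand-crafted temporal feature: mean absolute change in the
    fingerprint topography between the first and last image."""
    first, last = time_sequence[0], time_sequence[-1]
    return float(np.mean(np.abs(last - first)))

def train_threshold(live_seqs, spoof_seqs):
    """'Train' a one-parameter model on labeled time-sequences: place a
    decision threshold halfway between the mean live and mean spoof
    change scores (a stand-in for a real machine learning algorithm)."""
    live = [sequence_feature(s) for s in live_seqs]
    spoof = [sequence_feature(s) for s in spoof_seqs]
    return (np.mean(live) + np.mean(spoof)) / 2.0
```

A sequence whose feature exceeds the trained threshold would be classified as a live finger.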