G06N3/048

MOVEMENT DATA FOR FAILURE IDENTIFICATION

Configurations for data center component monitoring are disclosed. In at least one embodiment, movement of a server component is determined based on sensor data and the movement is used to diagnose a root cause of a server component failure.

CARDIOGRAM COLLECTION AND SOURCE LOCATION IDENTIFICATION
20230049769 · 2023-02-16

Systems are provided for generating data representing electromagnetic states of a heart for medical, scientific, research, and/or engineering purposes. The systems generate the data based on source configurations such as dimensions of, and scar or fibrosis or pro-arrhythmic substrate location within, a heart and a computational model of the electromagnetic output of the heart. The systems may dynamically generate the source configurations to provide representative source configurations that may be found in a population. For each source configuration of the electromagnetic source, the systems run a simulation of the functioning of the heart to generate modeled electromagnetic output (e.g., an electromagnetic mesh for each simulation step with a voltage at each point of the electromagnetic mesh) for that source configuration. The systems may generate a cardiogram for each source configuration from the modeled electromagnetic output of that source configuration for use in predicting the source location of an arrhythmia.
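A drastically simplified sketch of the simulate-then-measure idea above, assuming a toy 1-D mesh, a constant-speed activation wavefront, and a single distance-weighted electrode; the mesh, the "physics," and all names here are illustrative, not the patent's model:

```python
import numpy as np

def simulate_cardiogram(source_pos, n_nodes=50, steps=60, scar=None):
    """Toy simulation: an activation wave spreads from `source_pos` over a
    1-D mesh of nodes; scar (fibrotic) nodes never activate. The 'cardiogram'
    is the distance-weighted sum of node voltages at each simulation step."""
    lead_pos = -5.0                                  # hypothetical electrode location
    dist = np.abs(np.arange(n_nodes) - lead_pos)     # node-to-electrode distances
    trace = []
    for t in range(steps):
        active = np.abs(np.arange(n_nodes) - source_pos) <= t  # wavefront radius t
        v = np.where(active, 1.0, 0.0)
        if scar is not None:
            v[scar] = 0.0                            # fibrotic tissue stays silent
        trace.append(np.sum(v / dist))
    return np.array(trace)

# Cardiograms for two candidate source configurations
cg_a = simulate_cardiogram(source_pos=10)
cg_b = simulate_cardiogram(source_pos=40, scar=slice(20, 25))
print(cg_a.shape)
```

Running such a simulation for many source configurations yields a library of labeled cardiograms, which is what would let a model map an observed cardiogram back to a likely source location.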

CONTINUOUS MACHINE LEARNING METHOD AND SYSTEM FOR INFORMATION EXTRACTION

Methods and systems for artificial intelligence (AI)-assisted document annotation and training of machine learning-based models for document data extraction are described. The methods and systems described herein take advantage of a continuous machine learning approach to create document processing pipelines that provide accurate and efficient data extraction from documents that include structured text, semi-structured text, unstructured text, or any combination thereof.

MACHINE LEARNING MODELS WITH EFFICIENT FEATURE LEARNING
20230046601 · 2023-02-16

A method can be used to predict risk using machine learning models having efficient feature learning. A risk prediction model can be applied to time-series data associated with a target entity to generate a risk indicator. The risk prediction model can include a feature learning model for generating features from the time-series data. The risk prediction model can also include a risk classification model for generating the risk indicator. The feature learning model can include filters and can be trained. Parameters of the risk prediction model can be adjusted to minimize a loss function associated with risk indicators. An updated risk prediction model can be generated by removing a filter from an original set of filters based on influence scores of the original filters. The risk indicator can be transmitted to a computing device for use in controlling access of the target entity to a computing environment.
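The filter-removal step can be sketched with a hypothetical influence heuristic (mean absolute weight as a proxy score); the actual model would score filters by their effect on the loss:

```python
import numpy as np

def influence_scores(filters):
    """Proxy influence score per filter: mean absolute weight (an assumed
    heuristic standing in for the model's loss-based influence scores)."""
    return np.abs(filters).mean(axis=(1, 2))

def prune_least_influential(filters, keep):
    """Return the `keep` filters with the highest influence scores."""
    scores = influence_scores(filters)
    keep_idx = np.argsort(scores)[::-1][:keep]
    return filters[np.sort(keep_idx)]

# 8 filters, each spanning 3 time steps of a 4-channel time series
rng = np.random.default_rng(0)
bank = rng.normal(size=(8, 3, 4))
pruned = prune_least_influential(bank, keep=6)
print(pruned.shape)  # (6, 3, 4)
```

The pruned filter bank defines the updated risk prediction model, which would then be fine-tuned against the same loss.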

Multimodal based punctuation and/or casing prediction

Techniques for predicting punctuation and casing using multimodal fusion are described. An exemplary method includes processing generated text by: tokenizing the generated text into sub-words, and generating a sequence of lexical features for the sub-words using a pre-trained lexical encoder; processing the audio by: generating a sequence of frame level acoustic embeddings using a pre-trained acoustic encoder on the audio, and generating task specific embeddings from the frame level acoustic embeddings; performing multimodal fusion of the sub-word level acoustic embeddings and the sequence of lexical features by: aligning the task specific embeddings to the sequence of lexical features, and combining the sequence of lexical features and aligned acoustic sequence; predicting punctuation and casing from the combined sequence of lexical features and aligned acoustic sequence; concatenating the sub-words of the text, and applying the predicted punctuation and casing; and outputting text having the predicted punctuation and casing.
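The alignment-and-fusion steps can be sketched as follows, assuming a hypothetical frame-to-token alignment (mean-pooling a known number of frames per sub-word) and simple concatenation as the fusion operator:

```python
import numpy as np

def align_frames_to_tokens(frames, frames_per_token):
    """Mean-pool frame-level acoustic embeddings into one vector per
    sub-word token (assumes the frame counts per token are known)."""
    pooled, start = [], 0
    for n in frames_per_token:
        pooled.append(frames[start:start + n].mean(axis=0))
        start += n
    return np.stack(pooled)

def fuse(lexical, acoustic):
    """Fuse modalities by concatenating aligned acoustic and lexical features."""
    return np.concatenate([lexical, acoustic], axis=-1)

frames = np.random.default_rng(1).normal(size=(10, 16))   # 10 frames, 16-dim
lexical = np.random.default_rng(2).normal(size=(4, 32))   # 4 sub-words, 32-dim
aligned = align_frames_to_tokens(frames, [3, 2, 4, 1])
fused = fuse(lexical, aligned)
print(fused.shape)  # (4, 48)
```

The fused per-token sequence is what a downstream classifier would consume to predict punctuation and casing for each sub-word.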

Method and system for interactive, interpretable, and improved match and player performance predictions in team sports

A method of generating an outcome for a sporting event is disclosed herein. A computing system retrieves tracking data from a data store. The computing system generates a predictive model using a deep neural network. The one or more neural networks of the deep neural network generate one or more embeddings comprising team-specific information and agent-specific information based on the tracking data. The computing system selects, from the tracking data, one or more features related to a current context of the sporting event. The computing system learns, by the deep neural network, one or more likely outcomes of one or more sporting events. The computing system receives a pre-match lineup for the sporting event. The computing system generates, via the predictive model, a likely outcome of the sporting event based on historical information of each agent for the home team, each agent for the away team, and team-specific features.
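A minimal forward-pass sketch of the lineup-to-outcome idea, with made-up dimensions, mean-pooled agent embeddings, and a single linear layer standing in for the deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 known agents with 8-dim embeddings, 6 team features
agent_emb = rng.normal(size=(100, 8))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_outcome(home_ids, away_ids, team_feats, W, b):
    """Pool agent embeddings for each pre-match lineup, append team-specific
    features, and map to win/draw/loss probabilities with one linear layer."""
    home = agent_emb[home_ids].mean(axis=0)
    away = agent_emb[away_ids].mean(axis=0)
    x = np.concatenate([home, away, team_feats])
    return softmax(x @ W + b)

W = rng.normal(size=(8 + 8 + 6, 3)) * 0.1
b = np.zeros(3)
probs = predict_outcome(np.arange(11), np.arange(11, 22), rng.normal(size=6), W, b)
print(np.round(probs, 3))
```

In the described system the embeddings and the outcome head would be learned jointly from tracking data rather than drawn at random.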

Reinforcement learning for concurrent actions

A computer-implemented method comprises instantiating a policy function approximator. The policy function approximator is configured to calculate a plurality of estimated action probabilities in dependence on a given state of the environment. Each of the plurality of estimated action probabilities corresponds to a respective one of a plurality of discrete actions performable by a reinforcement learning agent within the environment. An initial plurality of estimated action probabilities in dependence on a first state of the environment are calculated. Two or more of the plurality of discrete actions are concurrently performed within the environment when the environment is in the first state. In response to the concurrent performance, a reward value is received. In response to the received reward value being greater than a baseline reward value, the policy function approximator is updated, such that it is configured to calculate an updated plurality of estimated action probabilities.
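A toy REINFORCE-style sketch of the described update rule, assuming independent per-action probabilities, a running-average baseline, and a hypothetical four-action environment (none of these specifics come from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Policy approximator: one logit per discrete action; actions are sampled
# independently, so several can be performed concurrently.
logits = np.zeros(4)
baseline = 0.0
lr = 0.5

def env_reward(actions):
    """Toy environment: reward is the number of actions matching a target."""
    target = np.array([1, 0, 1, 0])
    return float(np.sum(actions == target))

for _ in range(200):
    probs = sigmoid(logits)
    actions = (rng.random(4) < probs).astype(int)   # concurrent discrete actions
    r = env_reward(actions)
    advantage = r - baseline
    # Update only when the reward beats the baseline, per the described rule
    if advantage > 0:
        grad = (actions - probs) * advantage        # d log-prob / d logits
        logits += lr * grad
    baseline = 0.9 * baseline + 0.1 * r             # running baseline estimate

probs = sigmoid(logits)
print(np.round(probs, 2))
```

With training, the probabilities of actions that tend to raise the reward above the baseline increase, which is the behavior the update in the abstract aims for.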

Deep learning based methods and systems for nucleic acid sequencing

Methods and systems for determining a plurality of sequences of nucleic acid (e.g., DNA) molecules in a sequencing-by-synthesis process are provided. In one embodiment, the method comprises obtaining images of fluorescent signals obtained in a plurality of synthesis cycles. The images of fluorescent signals are associated with a plurality of different fluorescence channels. The method further comprises preprocessing the images of fluorescent signals to obtain processed images. Based on a set of the processed images, the method further comprises detecting center positions of clusters of the fluorescent signals using a trained convolutional neural network (CNN) and extracting, based on the center positions of the clusters of fluorescent signals, features from the set of the processed images to generate feature embedding vectors. The method further comprises determining, in parallel, the plurality of sequences of DNA molecules from the extracted features using a trained attention-based neural network.
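The center-detection and feature-extraction steps can be sketched with a local-maximum heuristic standing in for the trained CNN; the threshold, patch radius, and test image are illustrative:

```python
import numpy as np

def local_maxima(img, thresh):
    """Stand-in for the trained CNN: flag pixels brighter than all 8 neighbours
    and above a threshold as cluster center positions."""
    p = np.pad(img, 1, mode="constant", constant_values=-np.inf)
    neigh = np.stack([p[1+dy:1+dy+img.shape[0], 1+dx:1+dx+img.shape[1]]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)])
    mask = (img > neigh.max(axis=0)) & (img > thresh)
    return np.argwhere(mask)

def extract_features(images, centers, r=1):
    """Build a feature embedding per cluster by flattening an r-radius patch
    around its center from every fluorescence-channel image."""
    feats = []
    for (y, x) in centers:
        patch = [im[y-r:y+r+1, x-r:x+r+1].ravel() for im in images]
        feats.append(np.concatenate(patch))
    return np.array(feats)

img = np.zeros((8, 8))
img[2, 3] = 5.0          # two synthetic fluorescent clusters
img[5, 6] = 4.0
centers = local_maxima(img, thresh=1.0)
feats = extract_features([img, img * 0.5], centers)
print(centers.tolist(), feats.shape)
```

One embedding vector per cluster across cycles is what the attention-based network would consume to call all sequences in parallel.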

Systems and methods for artificial intelligence discovered codes

Systems and methods for artificial intelligence discovered codes are described herein. A method includes obtaining received samples from a receive decoder, obtaining decoded bits from the receive decoder based on the received samples, training an encoder neural network of a transmit encoder, the encoder neural network receiving parameters that comprise the information bits, the received samples, and the decoded bits. The encoder neural network is optimized using a loss function applied to the decoded bits and the information bits to calculate a forward error correcting code.
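An end-to-end sketch of learning a code against a noisy channel, substituting a random-search update for the gradient-based neural-network training described above; the dimensions, noise level, and sign nonlinearity are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 2, 6          # K information bits encoded into N channel samples
msgs = np.array([[int(b) for b in f"{m:0{K}b}"] for m in range(2 ** K)])

def encode(G, bits):
    """Hypothetical encoder 'network': one linear layer plus a sign
    nonlinearity, mapping bits in {0,1} to channel samples in {-1,+1}."""
    return np.sign(np.tanh((2 * bits - 1) @ G) + 1e-9)

def decode(G, samples):
    """Nearest-codeword decoder over all 2^K messages (feasible for tiny K)."""
    book = encode(G, msgs)
    return msgs[np.argmin(((samples[None] - book) ** 2).sum(-1))]

def loss(G, trials=200, noise=0.8):
    """Bit errors between decoded bits and the transmitted information bits."""
    errs = 0
    for _ in range(trials):
        m = msgs[rng.integers(len(msgs))]
        y = encode(G, m) + noise * rng.normal(size=N)   # noisy channel
        errs += int(np.sum(decode(G, y) != m))
    return errs

# Gradient-free stand-in for the described neural-network optimization:
# keep random perturbations of the encoder weights that reduce the loss.
G = rng.normal(size=(K, N))
best = loss(G)
for _ in range(30):
    cand = G + 0.3 * rng.normal(size=G.shape)
    l = loss(cand)
    if l < best:
        G, best = cand, l
print(best)
```

The learned weight matrix `G` plays the role of the discovered forward error correcting code; the patent's approach would instead backpropagate the decoded-bits loss through the encoder network.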

Accelerated deep learning

Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency, such as accuracy of learning, accuracy of prediction, speed of learning, performance of learning, and energy efficiency of learning. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has processing resources and memory resources. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Stochastic gradient descent, mini-batch gradient descent, and continuous propagation gradient descent are techniques usable to train weights of a neural network modeled by the processing elements. Reverse checkpoint is usable to reduce memory usage during the training.