Patent classifications
G06N3/006
SYSTEM FOR PRESERVING IMAGE AND ACOUSTIC SENSITIVITY USING REINFORCEMENT LEARNING
Systems, computer program products, and methods are described herein for preserving image and acoustic sensitivity using reinforcement learning. The present invention is configured to initiate a file editing engine on the audiovisual file to separate the audiovisual file into a video component comprising one or more image frames and an audio component; initiate a convolutional neural network (CNN) algorithm on the video component to identify one or more sensitive portions in the one or more image frames; initiate an audio word2vec algorithm on the audio component to identify one or more sensitive portions in the audio component; initiate a masking algorithm on the one or more image frames and the audio component to determine a masking action policy; generate a masked video component and a masked audio component based on at least implementing the masking action policy; and bind, using the file editing engine, the masked video component and the masked audio component to generate a masked audiovisual file.
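The pipeline the abstract describes (split, detect sensitive portions per component, mask, rebind) can be sketched as follows. This is a toy stand-in, not the patented system: the "CNN" is a pixel threshold, the "audio word2vec" step is a sensitive-word lookup, and all names are hypothetical.

```python
# Toy sketch of the split -> detect -> mask -> rebind pipeline.
# detect_sensitive_regions stands in for the CNN; the sensitive_words
# lookup stands in for the audio word2vec step.

def detect_sensitive_regions(frame):
    # Stand-in for the CNN: flag pixel values above a threshold as sensitive.
    return [i for i, px in enumerate(frame) if px > 200]

def mask_frame(frame, regions):
    # Masking action: zero out flagged pixels.
    return [0 if i in regions else px for i, px in enumerate(frame)]

def mask_audiovisual(frames, audio_words, sensitive_words):
    # Mask each component separately, then "bind" them back together.
    masked_frames = [mask_frame(f, detect_sensitive_regions(f)) for f in frames]
    masked_audio = ["<masked>" if w in sensitive_words else w for w in audio_words]
    return masked_frames, masked_audio
```

In the patented system the masking action policy would come from reinforcement learning rather than fixed rules; the data flow between components is the point of the sketch.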
SYSTEM AND METHOD FOR MULTI-TASK LIFELONG LEARNING ON PERSONAL DEVICE WITH IMPROVED USER EXPERIENCE
This disclosure relates to recommendations made to users based on learned behavior patterns. User behavior data is collected and grouped according to classification labels. The grouped user behavior data is labeled and used to train a machine learning model based on features and tasks associated with the classification. User behavior is then predicted by applying the trained machine learning model to the collected user behavior data, and a task is recommended to the user.
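A minimal sketch of the group-train-predict-recommend loop, assuming a nearest-centroid classifier as the "trained model" (the patent does not specify the model; all names here are hypothetical):

```python
# Group labeled behavior vectors, "train" by computing per-label centroids,
# then recommend the task whose centroid is closest to new behavior data.

def train(behavior_data):
    # behavior_data: list of (feature_vector, task_label) pairs
    groups = {}
    for features, label in behavior_data:
        groups.setdefault(label, []).append(features)
    # one centroid per label plays the role of the trained model
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in groups.items()}

def recommend_task(model, features):
    # predict the closest label and recommend its associated task
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))
```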
METHOD AND SYSTEM FOR TRAINING A MACHINE LEARNING MODEL
An initially trained machine learning model is used by an active learning module to generate candidate triples, which are fed into an expert system for verification. As a result, the expert system outputs novel facts that are used for retraining the machine learning model. This approach consolidates expert systems with machine learning through iterations of an active learning loop. Bringing the two paradigms together is in general difficult, because training a neural network requires differentiable functions, while the rules used by expert systems tend not to be differentiable. The method and system provide a data augmentation strategy in which the expert system acts as an oracle and outputs the novel facts, which provide labels for the candidate triples. The novel facts inject critical information from the oracle into the machine learning model at the retraining stage, thus increasing its generalization performance.
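The active learning loop above can be sketched as follows. This is an illustrative skeleton, not the patented method: model confidence is a plain score per triple, the oracle is a callable standing in for the expert system, and "retraining" is reduced to overwriting the score with the oracle's label.

```python
# Toy active-learning loop: query the oracle (expert system stand-in) on the
# candidate triples the model is least certain about, collect novel facts,
# and fold them back in as labels for retraining.

def active_learning_loop(model_scores, oracle, rounds=1, batch=2):
    labeled = {}  # triple -> label (novel fact) from the oracle
    for _ in range(rounds):
        # least-confident candidates: score closest to 0.5
        unlabeled = [t for t in model_scores if t not in labeled]
        queries = sorted(unlabeled, key=lambda t: abs(model_scores[t] - 0.5))[:batch]
        for triple in queries:
            labeled[triple] = oracle(triple)                     # verified fact
            model_scores[triple] = 1.0 if labeled[triple] else 0.0  # "retrain"
    return labeled
```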
SYSTEM AND METHOD FOR LEARNING TO GENERATE CHEMICAL COMPOUNDS WITH DESIRED PROPERTIES
A system and method for generating libraries of chemical compounds having desired and specific properties by formulating a reaction-based mechanism that may be powered by several algorithms, including but not limited to genetic algorithms, expert iteration algorithms, planning methods, reinforcement learning, and machine learning algorithms. The system and method may also provide the process steps by which these optimized products S′ may be synthesized from the reactants R1, R2, and further enable a rapid and efficient search of the synthetically accessible chemical space.
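One of the listed drivers, a genetic algorithm over reaction products, can be illustrated with a deliberately simplified encoding: reactants are strings, a "reaction" concatenates fragments, and fitness rewards a target string length standing in for a chemical property score. Nothing here reflects real chemistry; it only shows the evolve-by-reaction search shape.

```python
# Toy reaction-based genetic search: combine reactant fragments, keep the
# fittest products (elitism), and repeat. Fitness favors a target "property"
# (string length as a stand-in for a property score).
import random

def evolve(reactant_pool, target_len, generations=30, seed=0):
    rng = random.Random(seed)
    population = [rng.choice(reactant_pool) + rng.choice(reactant_pool)
                  for _ in range(20)]
    def fitness(compound):
        return -abs(len(compound) - target_len)
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:5]
        # "reaction step": combine a fragment of one parent with another parent
        population = [rng.choice(parents)[:rng.randint(1, 4)] + rng.choice(parents)
                      for _ in range(20)]
        population += parents  # elitism keeps the best products S'
    return max(population, key=fitness)
```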
CONFIGURING A NEURAL NETWORK FOR EQUIVARIANT OR INVARIANT BEHAVIOR
A method for configuring a neural network which is designed to map measured data to one or more output variables. The method includes: specifying one or more transformations of the measured data which, when applied to the measured data, are meant to induce an invariant or equivariant behavior in the output variables supplied by the neural network; setting up at least one equation that links the condition that the desired invariance or equivariance hold with the architecture of the neural network; solving the at least one equation to obtain a feature that characterizes the desired architecture and/or a distribution of weights of the neural network in at least one location of this architecture; and configuring a neural network in such a way that its architecture and/or its distribution of weights in at least one location of this architecture has all of the features ascertained in this way.
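A concrete instance of such an equation is shift equivariance for a linear layer: requiring W·S = S·W for the cyclic-shift operator S forces W to be circulant, i.e. the equation determines the weight distribution (shared weights along diagonals). The sketch below checks this numerically for a small circulant matrix; it illustrates the principle, not the patented configuration procedure.

```python
# Shift-equivariance condition W @ S == S @ W: circulant weight matrices
# are exactly its solutions, so solving the equation fixes the weight
# distribution (weight sharing along diagonals).

def shift(v):
    # cyclic shift of a vector by one position
    return v[-1:] + v[:-1]

def circulant(row):
    # build a circulant matrix from its first row
    n = len(row)
    return [[row[(j - i) % n] for j in range(n)] for i in range(n)]

def apply(W, v):
    # matrix-vector product
    return [sum(w * x for w, x in zip(r, v)) for r in W]

W = circulant([1, 2, 0, 0])
x = [3, 1, 4, 1]
# equivariance: shifting the input shifts the output
assert apply(W, shift(x)) == shift(apply(W, x))
```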
Machine learning assisted source code refactoring to mitigate anti-patterns
Techniques are described for enabling the automatic refactoring of software application source code to mitigate identified anti-patterns and other software modernization-related issues. A software modernization system analyzes software applications to generate various types of modernization report information, where the report information can include identifications of various types of design and cloud anti-patterns, proposed decompositions of monolithic applications into subunits, refactoring cost information, recommended modernization tools and migration paths, among other such information. A software modernization system further includes a refactoring engine that can automatically refactor source code based on such application analysis information, e.g., to automatically address identified anti-patterns, restructure code for decomposition, etc. A refactoring engine performs refactoring actions based on refactoring templates, machine learning (ML) refactoring models, or other input.
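The refactoring-template mechanism can be sketched as pattern/replacement pairs applied to source text. This is a hypothetical, much-simplified stand-in for the described refactoring engine, using regular expressions on a well-known Python anti-pattern (comparing booleans to literals):

```python
# Template-based refactoring pass: each template pairs an anti-pattern
# regex with its replacement; the engine applies all templates to the source.
import re

REFACTORING_TEMPLATES = [
    # anti-pattern: explicit comparison of a name against a boolean literal
    (re.compile(r"if\s+(\w+)\s*==\s*True:"), r"if \1:"),
    (re.compile(r"if\s+(\w+)\s*==\s*False:"), r"if not \1:"),
]

def refactor(source):
    for pattern, replacement in REFACTORING_TEMPLATES:
        source = pattern.sub(replacement, source)
    return source
```

A production engine would work on a parsed syntax tree rather than raw text, and could draw its rewrites from ML refactoring models as well as templates.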
Fault resilient airborne network
A fault resilient airborne network includes a plurality of aircraft system components installed within an aircraft and at least one agent in communication with the plurality of aircraft system components during in-flight operation of the aircraft. The at least one agent is configured to monitor an aircraft system component for a fault, observe a fault within the aircraft system component, and provide reconfiguration instructions to the aircraft system component in response to the observed fault. The at least one agent is further configured to predict a life expectancy of the aircraft system component using machine learning models while monitoring the aircraft system component for a fault, and provide reconfiguration instructions to the aircraft system component when the life expectancy of the aircraft system component meets a threshold. The reconfiguration instructions are configured to cause an adjustment in at least some of the plurality of aircraft system components.
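The agent's decision logic, observed fault or predicted life expectancy below a threshold triggers reconfiguration, can be sketched as below. The life-expectancy "model" here is a naive linear extrapolation of a wear signal; the real system would use trained ML models, and all names and units are hypothetical.

```python
# Agent monitoring loop sketch: reconfigure on an observed fault, or when
# predicted remaining life drops to a threshold.

LIFE_THRESHOLD_HOURS = 100.0

def predict_life_expectancy(wear_history):
    # stand-in for the ML model: linear extrapolation of wear (0.0 = new,
    # 1.0 = worn out), one sample per hour
    rate = (wear_history[-1] - wear_history[0]) / max(len(wear_history) - 1, 1)
    return (1.0 - wear_history[-1]) / rate if rate > 0 else float("inf")

def monitor(component):
    if component["fault"]:
        return {"action": "reconfigure", "reason": "fault"}
    if predict_life_expectancy(component["wear"]) <= LIFE_THRESHOLD_HOURS:
        return {"action": "reconfigure", "reason": "life_expectancy"}
    return {"action": "none"}
```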
Method and system for synchronizing virtual and real statuses of digital twin system of unmanned aerial vehicle (UAV)
A method for synchronizing virtual and real statuses of a digital twin system of an unmanned aerial vehicle (UAV) includes: performing parameter configuration for a virtual object system and a physical object system of the UAV; performing time synchronization between the virtual object system and the physical object system; detecting an event trigger type, wherein the event trigger type is a training event or a monitoring event; and triggering a corresponding synchronization controller based on the detected event trigger type, such that the synchronization controller performs result synchronization and process synchronization for the virtual object system and the physical object system based on the event trigger type, where a synchronization controller corresponding to the training event is a controller for synchronizing a physical object to a virtual object, and a synchronization controller corresponding to the monitoring event is a controller for synchronizing the virtual object to the physical object.
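The event-triggered dispatch above can be sketched as a table mapping the event type to the corresponding synchronization controller. The state representation and controller internals are hypothetical; only the dispatch structure follows the abstract.

```python
# Event-triggered dispatch: a training event syncs physical -> virtual,
# a monitoring event syncs virtual -> physical.

def sync_physical_to_virtual(physical, virtual):
    virtual.update(physical)      # training: the twin follows the real UAV
    return virtual

def sync_virtual_to_physical(physical, virtual):
    physical.update(virtual)      # monitoring: the real UAV follows the twin
    return physical

CONTROLLERS = {
    "training": sync_physical_to_virtual,
    "monitoring": sync_virtual_to_physical,
}

def handle_event(event_type, physical, virtual):
    return CONTROLLERS[event_type](physical, virtual)
```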
Reinforcement learning using a relational network for generating data encoding relationships between entities in an environment
A neural network system is proposed, including an input network for extracting, from state data, respective entity data for each of a plurality of entities which are present, or at least potentially present, in the environment. The entity data describes the entity. The neural network contains a relational network for parsing this data, which includes one or more attention blocks that may be stacked to perform successive operations on the entity data. The attention blocks each include a respective transform network for each of the entities. The transform network for each entity transforms the data it receives for that entity into modified entity data, based on data for a plurality of the other entities. An output network is arranged to receive data output by the relational network and use the received data to select a respective action.
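The core of such an attention block, each entity's data transformed using data from the other entities, can be sketched in plain Python. Dimensions, the dot-product scoring, and the shared query/key/value role of each entity vector are illustrative simplifications, not the patented architecture:

```python
# Toy attention block over entity vectors: each entity attends to all
# entities (itself included) and outputs a softmax-weighted combination.
import math

def attention_block(entities):
    # entities: list of feature vectors, each serving as query, key and value
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    out = []
    for q in entities:
        scores = [dot(q, k) for k in entities]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]          # softmax over entities
        out.append([sum(w * v[i] for w, v in zip(weights, entities))
                    for i in range(len(q))])        # weighted sum of values
    return out
```

Stacking such blocks lets modified entity data from one block feed the next, which is what the abstract describes as successive operations on the entity data.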
Reinforcement learning for concurrent actions
A computer-implemented method comprises instantiating a policy function approximator. The policy function approximator is configured to calculate a plurality of estimated action probabilities in dependence on a given state of the environment. Each of the plurality of estimated action probabilities corresponds to a respective one of a plurality of discrete actions performable by the reinforcement learning agent within the environment. An initial plurality of estimated action probabilities in dependence on a first state of the environment is calculated. Two or more of the plurality of discrete actions are concurrently performed within the environment when the environment is in the first state. In response to the concurrent performance, a reward value is received. In response to the received reward value being greater than a baseline reward value, the policy function approximator is updated, such that it is configured to calculate an updated plurality of estimated action probabilities.
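The update rule can be sketched with a per-action Bernoulli policy: each discrete action has its own probability, several actions are performed concurrently, and the probabilities of the performed actions are nudged up only when the reward beats the baseline. The parameterization and learning rate are illustrative, not the patented approximator:

```python
# Concurrent-action policy update sketch: per-action sigmoid probabilities;
# update only fires when reward exceeds the baseline, and only the performed
# actions' logits are adjusted.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def update_policy(logits, performed_actions, reward, baseline, lr=0.5):
    if reward <= baseline:
        return logits                      # no update unless reward > baseline
    new_logits = list(logits)
    for a in performed_actions:
        p = sigmoid(logits[a])
        new_logits[a] += lr * (1.0 - p) * (reward - baseline)  # push p up
    return new_logits
```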