G05B2219/33056

Selectively activating a resource by detecting emotions through context analysis

A method selectively activates a resource to accommodate an advanced emotion. A supervisor computer receives a first piece of content, and then applies an emotion classifier to the first piece of content in order to create a first concept/emotion/sentiment/time tuple. The supervisor computer creates a second concept/emotion/sentiment/time tuple for a second piece of content, and compares the first and second tuples. If the concept in the first piece of content matches the concept in the second piece of content but at least one of the emotion, sentiment, and time of the first piece of content does not match the corresponding emotion, sentiment, or time of the second piece of content, the supervisor computer determines that an advanced emotion exists that is not explicitly expressed by either piece of content, and activates a resource that accommodates the advanced emotion.
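The tuple comparison described above can be sketched in a few lines. This is a hedged illustration, not the patented method: the tuple fields follow the abstract, but the field values, the `ADVANCED` lookup table, and the idea that an emotion shift maps to an advanced emotion are all assumptions.

```python
from collections import namedtuple

# Tuple shape taken from the abstract; contents are invented examples.
Tuple4 = namedtuple("Tuple4", "concept emotion sentiment time")

# Assumed mapping from an observed emotion shift to an advanced emotion.
ADVANCED = {("joy", "sadness"): "disappointment"}

def detect_advanced_emotion(t1, t2):
    """Return an advanced emotion if concepts match but emotion/sentiment/time differ."""
    if t1.concept != t2.concept:
        return None
    if (t1.emotion, t1.sentiment, t1.time) == (t2.emotion, t2.sentiment, t2.time):
        return None
    return ADVANCED.get((t1.emotion, t2.emotion))

t1 = Tuple4("vacation", "joy", "positive", 1)
t2 = Tuple4("vacation", "sadness", "negative", 2)
print(detect_advanced_emotion(t1, t2))  # -> disappointment
```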

Viewpoint invariant visual servoing of robot end effector using recurrent neural network

Training and/or using a recurrent neural network model for visual servoing of an end effector of a robot. In visual servoing, the model can be utilized to generate, at each of a plurality of time steps, an action prediction that represents a prediction of how the end effector should be moved to cause the end effector to move toward a target object. The model can be viewpoint invariant in that it can be utilized across a variety of robots having vision components at a variety of viewpoints and/or can be utilized for a single robot even when a viewpoint, of a vision component of the robot, is drastically altered. Moreover, the model can be trained based on a large quantity of simulated data that is based on simulator(s) performing simulated episode(s) in view of the model. One or more portions of the model can be further trained based on a relatively smaller quantity of real training data.
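The per-time-step prediction loop can be caricatured as a recurrent step that folds the current observation into a hidden state and emits an action prediction. Everything below — the scalar cell, the tanh nonlinearity, the weights, and the observation signal — is an illustrative assumption, not the patent's model.

```python
import math

def rnn_step(obs, hidden, w_obs=0.5, w_h=0.3):
    """One recurrent time step: fold the observation into the hidden state,
    then emit an action prediction for moving the end effector."""
    hidden = math.tanh(w_obs * obs + w_h * hidden)
    action = hidden  # predicted motion toward the target object
    return action, hidden

hidden = 0.0
for obs in [0.9, 0.7, 0.4, 0.1]:  # e.g. shrinking distance-to-target cues
    action, hidden = rnn_step(obs, hidden)
```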

Reinforcement learning for chatbots

A computer-implemented method for generating and deploying a reinforced learning model to train a chatbot. The method includes selecting a plurality of conversations, wherein each conversation includes an agent and a user. The method includes identifying, in each of the conversations, a set of turns and one or more topics. The method further includes associating one or more topics with each turn of the set of turns. The method includes generating a conversation flow for each conversation, wherein the conversation flow identifies a sequence of the topics. The method includes applying an outcome score to each conversation. The method includes creating a reinforced learning (RL) model, wherein the RL model includes a Markov model that is based on the conversation flow of each conversation and the outcome score of each conversation. The method includes deploying the RL model, wherein the deploying includes sending the RL model to a chatbot.
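The core idea — a Markov model over topic transitions weighted by conversation outcomes — can be sketched as follows. The conversation data, the mean-outcome scoring, and the greedy next-topic choice are assumptions for illustration, not the patented model.

```python
from collections import defaultdict

# Assumed sample data: (topic flow, outcome score) per conversation.
conversations = [
    (["greeting", "billing", "resolution"], 1.0),
    (["greeting", "billing", "escalation"], 0.2),
    (["greeting", "shipping", "resolution"], 0.9),
]

# totals[a][b] accumulates (summed outcome, count) for transition a -> b.
totals = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))
for flow, score in conversations:
    for a, b in zip(flow, flow[1:]):
        totals[a][b][0] += score
        totals[a][b][1] += 1

def best_next_topic(topic):
    """Pick the next topic with the highest mean outcome score."""
    options = totals[topic]
    return max(options, key=lambda b: options[b][0] / options[b][1])

print(best_next_topic("billing"))  # -> resolution
```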

Integrating sensor streams for robotic demonstration learning

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for integrating sensor streams for robotic demonstration learning. One of the methods includes selecting, by a learning system for a robot, a base update rate for combining multiple sensor streams into a task state representation. The learning system repeatedly generates the task state representation at the base update rate, including generating, during each time period defined by the update rate, the task state representation from the most recently updated sensor data processed by a plurality of neural networks. The learning system repeatedly uses the task state representations to generate commands for the robot at the base update rate.
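The combining step can be illustrated minimally: streams update at their own rates, and each base-rate tick snapshots the freshest reading from every stream. The sensor names and values are invented, and the per-stream neural-network processing is elided.

```python
latest = {}  # sensor name -> most recently processed reading

def ingest(sensor, value):
    """Record a stream update; streams may arrive faster than the base rate."""
    latest[sensor] = value

def task_state():
    """Snapshot taken once per base-rate tick from the freshest data."""
    return dict(latest)

ingest("camera", [0.1, 0.2])
ingest("force", 3.5)
ingest("camera", [0.2, 0.1])  # camera updated twice within one tick
state = task_state()
print(state)  # {'camera': [0.2, 0.1], 'force': 3.5}
```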

METHOD FOR COMPUTER-IMPLEMENTED CONFIGURATION OF A CONTROLLED DRIVE APPLICATION OF A LOGISTICS SYSTEM

A method for configuration of a controlled drive application of a logistics system. The logistics system includes parallel conveying paths for piece goods. Each conveying path includes sub-conveying paths which are each accelerated or delayed to merge the piece goods onto a single output conveying path with defined spacing. A system model of the logistics system is first determined from operating data of the logistics system, which include sensor values of the logistics system and changes to control variables. A control function is then determined, which includes configuration data for the drives, with at least one control action being performed subject to the precondition that one or more performance features are achieved in the system model; during this control action, the operating data is simulated for a plurality of time steps.
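The merge-with-defined-spacing goal can be sketched with a toy scheduling rule: delay each piece just enough that consecutive releases onto the output conveyor keep a fixed gap. This rule is an assumption for illustration, not the patented control function.

```python
def merge_release_times(arrivals, spacing):
    """Assign release times onto the output conveyor, enforcing a minimum
    spacing between consecutive piece goods from the parallel paths."""
    releases = []
    for t in sorted(arrivals):
        release = t if not releases else max(t, releases[-1] + spacing)
        releases.append(release)
    return releases

# Two pieces arrive almost together; the second is delayed to keep the gap.
print(merge_release_times([0.0, 0.2, 1.5], 1.0))  # -> [0.0, 1.0, 2.0]
```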

User feedback for robotic demonstration learning

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for providing user feedback for robotic demonstration learning. One of the methods includes initiating a local demonstration learning process to collect respective local demonstration data for each of one or more demonstration subtasks defined by a skill template to be executed by a robot. Local demonstration data is repeatedly collected for each of the one or more demonstration subtasks of the skill template while a user manipulates a robot to perform each of the one or more demonstration subtasks defined by the skill template. A respective progress value for each of the one or more demonstration subtasks defined by the skill template is maintained. A user interface presentation is generated that presents a suggested demonstration to be performed by the user based on a respective progress value for each demonstration subtask.
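The progress-tracking idea above admits a minimal sketch: keep a progress value per demonstration subtask and suggest the least-complete one. The subtask names, increment size, and "lowest progress first" policy are assumptions, not the patented method.

```python
# Assumed per-subtask progress values in [0, 1].
progress = {"pick": 0.8, "insert": 0.3, "release": 0.6}

def record_demonstration(subtask, increment=0.1):
    """Bump a subtask's progress after the user performs a demonstration."""
    progress[subtask] = min(1.0, progress[subtask] + increment)

def suggested_demonstration():
    """Suggest the demonstration subtask with the lowest progress value."""
    return min(progress, key=progress.get)

print(suggested_demonstration())  # -> insert
record_demonstration("insert")
```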

SELF-LEARNING MANUFACTURING SCHEDULING FOR A FLEXIBLE MANUFACTURING SYSTEM AND DEVICE
20220374002 · 2022-11-24

A method for self-learning manufacturing scheduling for a flexible manufacturing system that is used to produce at least one product is provided. The manufacturing system consists of processing entities that are interconnected through handling entities. The manufacturing scheduling is learned by a reinforcement learning system on a model of the flexible manufacturing system. The model represents at least a behavior and a decision making of the flexible manufacturing system, and is realized as a Petri net.

An order of the processing entities and the handling entities is interchangeable, and therefore, the whole arrangement is very flexible.
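A Petri net model of the kind the abstract mentions can be sketched as places holding tokens (parts, free machines) and transitions that fire when their input places are marked. The topology and names below are invented for illustration.

```python
# Places and initial marking: one part buffered, one machine free.
places = {"buffer": 1, "machine_free": 1, "machine_busy": 0, "done": 0}

# Each transition maps input places (tokens consumed) to outputs (produced).
transitions = {
    "start": ({"buffer": 1, "machine_free": 1}, {"machine_busy": 1}),
    "finish": ({"machine_busy": 1}, {"machine_free": 1, "done": 1}),
}

def fire(name):
    """Fire a transition if every input place holds enough tokens."""
    inputs, outputs = transitions[name]
    if any(places[p] < n for p, n in inputs.items()):
        return False
    for p, n in inputs.items():
        places[p] -= n
    for p, n in outputs.items():
        places[p] += n
    return True

fire("start")
fire("finish")
print(places["done"])  # -> 1
```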

Industrial plant controller
11507070 · 2022-11-22

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an industrial plant controller that controls operation of an industrial plant. In one aspect, a method comprises generating training data using an industrial plant simulation model that simulates operation of the industrial plant. The industrial plant controller is trained by a reinforcement learning technique using the training data. The industrial plant controller is configured to process an input comprising a state vector characterizing a state of the industrial plant in accordance with a plurality of industrial plant controller parameters to generate an action selection policy output that defines a control action to be performed to control the operation of the industrial plant.
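The controller interface the abstract describes — a state vector processed under controller parameters to yield an action selection policy output — can be sketched with a toy linear policy. The parameters, state features, and action names are invented; a trained controller would learn these via reinforcement learning on the simulated plant.

```python
def policy(state, params):
    """Score each control action linearly from the state vector; pick the best."""
    scores = {
        action: sum(w * s for w, s in zip(weights, state))
        for action, weights in params.items()
    }
    return max(scores, key=scores.get)

# Assumed controller parameters: one weight vector per control action.
params = {"open_valve": [0.9, -0.2], "close_valve": [-0.5, 0.7]}
state = [1.0, 0.1]  # e.g. normalized pressure and temperature readings
print(policy(state, params))  # -> open_valve
```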

METHOD FOR SELF-LEARNING MANUFACTURING SCHEDULING FOR A FLEXIBLE MANUFACTURING SYSTEM BY USING A STATE MATRIX AND DEVICE
20220342398 · 2022-10-27

A method for self-learning manufacturing scheduling for a flexible manufacturing system (FMS) with processing entities that are interconnected through handling entities is disclosed. The manufacturing scheduling is learned by a reinforcement learning system on a model of the flexible manufacturing system. The model represents at least the behavior and the decision making of the flexible manufacturing system, and the model is transformed into a state matrix to simulate the state of the flexible manufacturing system. A self-learning system for online scheduling and resource allocation is also provided. The system is trained in a simulation and learns the best decision from a defined set of actions for every situation within an FMS. A decision may be made in near real-time during a production process, and the system finds the optimal way through the FMS for every product using different optimization goals.
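One plausible reading of the state-matrix idea: rows for processing entities, columns for slots, entries naming the product occupying each slot, with actions rewriting the matrix. This encoding is an assumption for illustration, not the patent's.

```python
# Assumed state matrix: rows = processing entities, columns = slots,
# nonzero entries identify the product occupying that slot.
state = [
    [1, 0],  # machine A holds product 1
    [0, 2],  # machine B holds product 2
    [0, 0],  # machine C is idle
]

def move(product, src, dst):
    """Simulate one scheduling action: relocate a product between entities."""
    state[src] = [0 if x == product else x for x in state[src]]
    slot = state[dst].index(0)  # first free slot at the destination
    state[dst][slot] = product

move(1, 0, 2)
print(state)  # -> [[0, 0], [0, 2], [1, 0]]
```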

ANOMALY DETECTION IN LATENT SPACE REPRESENTATIONS OF ROBOT MOVEMENTS
20230078625 · 2023-03-16

Provided is a process, including: obtaining, with a computer system, access to a specification indicating which regions of an embedding space are designated as anomalous relative to vectors in the embedding space characterizing past behavior of a first instance of a dynamical system; receiving, with the computer system, multi-channel input indicative of a state of a second instance of the dynamical system; classifying, with the computer system, whether the state of the second instance of the dynamical system is anomalous by: encoding the multi-channel input into a vector in the embedding space; causing the specification to be applied to the vector; obtaining a result of applying the specification to the vector; and classifying whether the state of the second instance of the dynamical system is anomalous based on the result; and storing the classification in memory.
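The classification step above can be sketched under simplifying assumptions: a toy encoder maps the multi-channel input to a 2-D embedding, and the "specification" is a distance threshold around centroids of past normal behavior. The encoder, centroids, and threshold are all invented for illustration.

```python
import math

# Assumed specification: regions farther than THRESHOLD from every centroid
# of past normal behavior are designated anomalous.
NORMAL_CENTROIDS = [(0.0, 0.0), (1.0, 1.0)]
THRESHOLD = 0.5

def encode(channels):
    """Toy encoder: embed multi-channel input as (mean, spread)."""
    mean = sum(channels) / len(channels)
    spread = max(channels) - min(channels)
    return (mean, spread)

def is_anomalous(channels):
    """Apply the region specification to the encoded vector."""
    v = encode(channels)
    dist = min(math.dist(v, c) for c in NORMAL_CENTROIDS)
    return dist > THRESHOLD

print(is_anomalous([0.1, -0.1]))  # -> False (close to past behavior)
print(is_anomalous([5.0, -5.0]))  # -> True  (far from every centroid)
```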