Patent classifications
G05B2219/41054
PRESSURE CONTROL IN A SUPPLY GRID
Methods, devices, and assemblies for controlling pressure in a supply grid are provided. The supply grid is suitable for supplying fluid to loads and has first sensors for measuring the flow and/or the pressure of the fluid at first locations in the supply grid, and a pump for pumping the fluid or a valve for controlling the flow of the fluid. The method includes: measuring the flow and/or pressure of the fluid at the first locations in the supply grid by the first sensors; predicting the pressure at a second location in the supply grid using a self-learning system based on the measured flows and/or pressures, wherein the self-learning system is trained to predict the pressure at a specified location in the supply grid; and actuating the pump or the valve based at least in part on the pressure predicted by the trained system at the second location.
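The predict-then-actuate loop described in this abstract can be sketched as follows. The linear prediction model, the setpoint comparison, and the proportional pump correction are illustrative assumptions, not the patented self-learning system.

```python
# Sketch of the described control loop: predict the pressure at an unmeasured
# second location from first-sensor readings, then derive a pump correction.

def predict_pressure(weights, bias, sensor_readings):
    """Stand-in for the trained model: weighted sum of measured flows/pressures."""
    return bias + sum(w * x for w, x in zip(weights, sensor_readings))

def control_step(weights, bias, sensor_readings, setpoint, gain=0.5):
    """Return a pump-speed correction driving the predicted pressure to setpoint."""
    predicted = predict_pressure(weights, bias, sensor_readings)
    return gain * (setpoint - predicted)

correction = control_step([0.2, 0.3], 1.0, [2.0, 4.0], setpoint=3.0)
```

In a real deployment the stand-in model would be replaced by the trained self-learning system, and the correction would feed the pump or valve actuator.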
METHOD FOR CONTROLLING AN AUTOMATION PROCESS IN REAL TIME
A method for controlling an automation process in real time based on a change profile of at least one process variable comprises: determining a first change profile by a real-time-capable recognition method based on a non-linear optimization process, taking into account at least one boundary condition of the process variable; determining a second change profile by a numerical algorithm based on the first change profile, including adapting a selected profile function to the first change profile by a numerical adaptation process and identifying the adapted profile function as the second change profile; checking whether the second change profile satisfies at least one secondary condition of the process variable; controlling the automation process based on the second change profile if it satisfies the secondary condition; and controlling the automation process based on a predetermined fallback profile if it does not.
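The fit-check-fallback pattern in this abstract can be sketched as below. The two-point linear profile fit and the rate-limit secondary condition are illustrative assumptions standing in for the patented adaptation process and conditions.

```python
# Sketch of the described selection logic: adapt a profile function to the
# first change profile, check a secondary condition, fall back if violated.

def fit_profile(samples):
    """Numerical adaptation stand-in: a two-point linear profile."""
    t0, y0 = samples[0]
    t1, y1 = samples[-1]
    slope = (y1 - y0) / (t1 - t0)
    return lambda t: y0 + slope * (t - t0)

def select_profile(samples, max_rate, fallback):
    """Return the adapted profile if it meets the rate limit, else the fallback."""
    profile = fit_profile(samples)
    rate = abs(profile(1.0) - profile(0.0))  # change per unit time
    return profile if rate <= max_rate else fallback

hold = lambda t: 0.0  # predetermined fallback: hold the current value
chosen = select_profile([(0.0, 0.0), (2.0, 1.0)], max_rate=1.0, fallback=hold)
```

The key design point is that the controller always has a valid profile to execute: either the adapted one or the predetermined fallback.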
MITIGATING REALITY GAP THROUGH SIMULATING COMPLIANT CONTROL AND/OR COMPLIANT CONTACT IN ROBOTIC SIMULATOR
The reality gap is mitigated through technique(s) that enable compliant robotic control and/or compliant robotic contact to be simulated effectively by a robotic simulator. The technique(s) can include, for example: (1) utilizing a compliant end effector model in simulated episodes of the robotic simulator; (2) using, during the simulated episodes, a soft constraint for a contact constraint of a simulated contact model of the robotic simulator; and/or (3) using proportional derivative (PD) control in generating joint control forces for simulated joints of the simulated robot during the simulated episodes. Implementations additionally or alternatively relate to determining parameter(s) for use in one or more of the techniques that enable effective simulation of compliant robotic control and/or compliant robotic contact.
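Technique (3), PD control for joint forces, can be sketched as follows. The gain values are illustrative assumptions; the patent concerns determining such parameters so the simulation behaves compliantly.

```python
# Sketch of PD joint control: the commanded force is proportional to the
# position error plus a derivative term on the velocity error, which yields
# compliant (spring-damper-like) behavior rather than rigid position tracking.

def pd_joint_force(target_pos, pos, vel, kp=50.0, kd=5.0, target_vel=0.0):
    """Force = kp * position error + kd * velocity error."""
    return kp * (target_pos - pos) + kd * (target_vel - vel)

force = pd_joint_force(target_pos=1.0, pos=0.8, vel=0.5)  # ≈ 7.5
```

Lower gains make the simulated joint yield more under contact, which is exactly the compliance the abstract aims to reproduce.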
DOMAIN ADAPTATION FOR ROBOTIC CONTROL USING SELF-SUPERVISED LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a policy neural network for use in controlling a real-world agent in a real-world environment. One of the methods includes training the policy neural network by optimizing a first task-specific objective that measures a performance of the policy neural network in controlling a simulated version of the real-world agent; and then training the policy neural network by jointly optimizing (i) a self-supervised objective that measures at least a performance of internal representations generated by the policy neural network on a self-supervised task performed on real-world data and (ii) a second task-specific objective that measures the performance of the policy neural network in controlling the simulated version of the real-world agent.
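The two training phases can be summarized as the objectives below. The scalar losses and the mixing weight are illustrative stand-ins for the neural-network objectives in the abstract.

```python
# Sketch of the two-phase training: phase 1 optimizes only the task objective
# in simulation; phase 2 jointly optimizes a self-supervised loss computed on
# real-world data together with the simulated task objective.

def phase1_loss(task_loss_sim):
    """Phase 1: task-specific objective on the simulated agent only."""
    return task_loss_sim

def phase2_loss(self_supervised_loss_real, task_loss_sim, alpha=0.5):
    """Phase 2: weighted joint objective over both losses."""
    return alpha * self_supervised_loss_real + (1 - alpha) * task_loss_sim
```

The joint phase-2 objective is what adapts the network's internal representations to real-world data without requiring real-world task labels.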
MACHINE LEARNING FOR INDUSTRIAL PROCESSES
Methods and systems for training a neural network in tandem with a policy gradient that incorporates domain knowledge with historical data. Process constraints are incorporated into training through an action mask. Evaluation of the trained network is provided by comparing the network's recommended actions with those of an operator. A decision tree is provided to explain a path from an input of process states, into the neural network, to the output of recommended actions.
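The action mask mentioned above can be sketched as follows. Zeroing and renormalizing the policy's action probabilities is one common masking scheme and is an illustrative assumption here.

```python
# Sketch of an action mask: process constraints zero out the probabilities of
# disallowed actions before an action is sampled from the policy.

def mask_policy(action_probs, allowed):
    """Zero disallowed actions and renormalize the remaining probabilities."""
    masked = [p if ok else 0.0 for p, ok in zip(action_probs, allowed)]
    total = sum(masked)
    if total == 0.0:
        raise ValueError("all actions masked out by process constraints")
    return [p / total for p in masked]

probs = mask_policy([0.5, 0.3, 0.2], [True, False, True])
```

Because masked actions receive zero probability, the policy gradient never reinforces actions that violate the process constraints.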
SYSTEM AND METHOD FOR PREDICTING ROBOTIC TASKS WITH DEEP LEARNING
A computing system is provided for training one or more machine learning models to perform at least a portion of a robotic task of a physical robotic system by monitoring a model-based control algorithm associated with the physical robotic system as it performs at least a portion of the robotic task. One or more robotic task predictions may be defined, via the one or more machine learning models, based upon, at least in part, the training of the one or more machine learning models. The one or more robotic task predictions may be provided to the model-based control algorithm associated with the physical robotic system. The robotic task may be performed, via the model-based control algorithm, on the physical robotic system based upon, at least in part, the one or more robotic task predictions defined by the one or more machine learning models.
Machine learning apparatus, servo control apparatus, servo control system, and machine learning method
Reinforcement learning is performed in a way that avoids complicated adjustment of the coefficients of backlash compensation and backlash acceleration compensation. A machine learning apparatus includes a state information acquiring part for acquiring, from a servo control apparatus, state information including at least a position deviation and a set of coefficients to be used by a backlash acceleration compensating part, by making the servo control apparatus execute a predetermined machining program; an action information output part for outputting action information, including adjustment information on the set of coefficients included in the state information, to the servo control apparatus; a reward output part for outputting a reward value in the reinforcement learning on the basis of the position deviation included in the state information; and a value function updating part for updating an action-value function on the basis of the reward value output by the reward output part, the state information, and the action information.
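The reward output and value-function update described above can be sketched as below. The negative-absolute-deviation reward shape and the learning-rate and discount values are illustrative assumptions, not the patented scheme.

```python
# Sketch of the described RL parts: a reward derived from position deviation
# (smaller deviation -> higher reward) drives a tabular action-value update.

def reward_from_deviation(position_deviation):
    """Negative absolute deviation: zero deviation gives the best reward."""
    return -abs(position_deviation)

def q_update(q, state, action, reward, best_next_q, alpha=0.1, gamma=0.9):
    """One action-value update: Q <- Q + alpha * (r + gamma * max Q' - Q)."""
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next_q - old)
    return q

q = q_update({}, state=0, action=1,
             reward=reward_from_deviation(0.5), best_next_q=0.0)
```

Here "state" would encode the current coefficient set and deviation, and "action" the coefficient adjustment output by the action information output part.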
Optical sensor optimization and system implementation with simplified layer structure
This disclosure includes methods for designing a simplified Integrated Computational Element (ICE) and for optimizing a selection of a combination of ICE designs. A method for fabricating a simplified ICE having one or more film layers includes predicting an optimal thickness of each of the one or more film layers of the simplified ICE using a neural network. A method for recalibrating the fabricated ICE elements for system implementation is also disclosed. The disclosure also includes the simplified ICE designed by and the ICE combination selected by the disclosed methods. The disclosure also includes an information handling system with machine-readable instructions to perform the methods disclosed herein.
TOOTH CONTACT POSITION ADJUSTMENT AMOUNT ESTIMATION DEVICE, MACHINE LEARNING DEVICE, AND ROBOT SYSTEM
A tooth contact position adjustment amount estimation device estimates a tooth contact position adjustment amount for dimensional data of parts constituting a power transmission mechanism. The device comprises a machine learning device that observes part dimensional data, which is the dimensional data of the parts constituting the power transmission mechanism, as a state variable indicating a current state of an environment, and estimates the tooth contact position adjustment amount for that dimensional data in assembling the power transmission mechanism by performing machine learning based on the observed state variable.
Training Spectrum Generation for Machine Learning System for Spectrographic Monitoring
A method of generating training spectra for training of a neural network includes measuring a first plurality of training spectra from one or more sample substrates, measuring a characterizing value for each training spectrum of the first plurality of training spectra to generate a plurality of characterizing values with each training spectrum having an associated characterizing value, measuring a plurality of dummy spectra during processing of one or more dummy substrates, and generating a second plurality of training spectra by combining the first plurality of training spectra and the plurality of dummy spectra, there being a greater number of spectra in the second plurality of training spectra than in the first plurality of training spectra. Each training spectrum of the second plurality of training spectra has an associated characterizing value.
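The augmentation step above can be sketched as follows. Element-wise averaging as the combination rule is an illustrative assumption; the abstract only requires that combining yields a larger set whose spectra keep the original characterizing values.

```python
# Sketch of the described training-set generation: each measured training
# spectrum is combined with each dummy spectrum, and every generated spectrum
# inherits the characterizing value of the training spectrum it came from.

def generate_training_set(training, dummies):
    """training: list of (spectrum, value) pairs; dummies: list of spectra."""
    combined = list(training)  # keep the original training spectra
    for spectrum, value in training:
        for dummy in dummies:
            mixed = [(a + b) / 2 for a, b in zip(spectrum, dummy)]
            combined.append((mixed, value))  # characterizing value carries over
    return combined

out = generate_training_set([([1.0, 2.0], 0.3)], [[3.0, 4.0]])
```

With N training spectra and M dummy spectra this yields N + N*M labeled spectra, satisfying the requirement that the second plurality be larger than the first.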