Training Spectrum Generation for Machine Learning System for Spectrographic Monitoring
A method of generating training spectra for training of a neural network includes generating a plurality of theoretically generated initial spectra from an optical model, sending the plurality of theoretically generated initial spectra to a feedforward neural network to generate a plurality of modified theoretically generated spectra, sending an output of the feedforward neural network and empirically collected spectra to a discriminatory convolutional neural network, determining that the discriminatory convolutional neural network does not discriminate between the modified theoretically generated spectra and the empirically collected spectra, and thereafter generating a plurality of training spectra from the feedforward neural network.
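The adversarial loop in this abstract can be sketched end to end. The sketch below is illustrative only, not the patented method: a single affine layer stands in for the feedforward generator, a nearest-class-mean classifier stands in for the discriminatory CNN, and a Gaussian-peak function stands in for the optical model. The loop stops once the stand-in discriminator can no longer separate modified spectra from empirical ones, and thereafter emits training spectra in bulk.

```python
import numpy as np

rng = np.random.default_rng(0)

def optical_model(n, bins=16):
    # Theoretical initial spectra: smooth Gaussian peaks (stand-in optical model).
    x = np.linspace(0.0, 1.0, bins)
    centers = rng.uniform(0.3, 0.7, size=(n, 1))
    return np.exp(-((x - centers) ** 2) / 0.02)

def generator(spectra, w, b):
    # Stand-in for the feedforward network: a single affine layer.
    return spectra * w + b

def discriminator_accuracy(fake, real):
    # Stand-in for the discriminatory CNN: a nearest-class-mean classifier.
    X = np.vstack([fake, real])
    y = np.r_[np.zeros(len(fake)), np.ones(len(real))]
    mu_f, mu_r = fake.mean(axis=0), real.mean(axis=0)
    pred = np.linalg.norm(X - mu_r, axis=1) < np.linalg.norm(X - mu_f, axis=1)
    return float((pred.astype(float) == y).mean())

# Empirically collected spectra: theoretical shape distorted by the real tool.
empirical = optical_model(200) * 1.3 + 0.05 + rng.normal(0, 0.01, (200, 16))

w, b = 1.0, 0.0
for _ in range(200):
    fake = generator(optical_model(200), w, b)
    acc = discriminator_accuracy(fake, empirical)
    if abs(acc - 0.5) < 0.05:
        break                          # discriminator can no longer tell them apart
    # Crude generator update: match the empirical moments (stands in for backprop).
    w *= empirical.std() / fake.std()
    b += 0.5 * (empirical.mean() - fake.mean())

# Thereafter, the generator produces the plurality of training spectra.
training_spectra = generator(optical_model(1000), w, b)
```

The generator ends up reproducing the scale and offset of the empirical spectra, which is exactly the point of the adversarial criterion: training spectra are accepted only once they are statistically indistinguishable from measured ones.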
Pressure control in a supply grid
Methods, devices, and assemblies for controlling pressure in a supply grid are provided. The supply grid is suitable for supplying fluid to loads. The supply grid has first sensors for measuring the flow and/or the pressure of the fluid at first locations in the supply grid, and a pump for pumping the fluid or a valve for controlling the flow of the fluid. The method includes: measuring the flow and/or pressure of the fluid at the first locations in the supply grid with the first sensors; predicting the pressure at a second location in the supply grid using a self-learning system based on the measured flows and/or pressures, wherein the self-learning system is trained to predict the pressure at a specified location in the supply grid; and actuating the pump or the valve at least partly based on the pressure predicted by the trained system at the second location.
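A minimal sketch of the predict-then-actuate step, with ordinary least squares standing in for the trained self-learning system and a proportional rule standing in for pump actuation (the sensor count, coefficients, setpoint, and gain are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Historical data: flows/pressures from three first sensors (features) and the
# pressure measured at the second location (target). All numbers are invented.
X_hist = rng.uniform(0.5, 2.0, size=(500, 3))
true_w = np.array([0.4, 0.3, 0.2])
y_hist = X_hist @ true_w + 0.1 + rng.normal(0, 0.01, 500)

# Stand-in for the trained self-learning system: ordinary least squares.
A = np.c_[X_hist, np.ones(len(X_hist))]
coef, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

def predict_pressure(first_sensor_readings):
    # Predict the pressure at the second location from the first-sensor readings.
    return float(np.r_[first_sensor_readings, 1.0] @ coef)

def pump_command(predicted, setpoint=1.2, gain=0.5, current=1.0):
    # Proportional actuation: raise the pump command if predicted pressure is low.
    return current + gain * (setpoint - predicted)

reading = np.array([1.0, 1.5, 0.8])   # current first-sensor measurements
p_hat = predict_pressure(reading)
cmd = pump_command(p_hat)
```

The design point the abstract makes is that the second location needs no physical sensor at run time: the controller acts on the model's prediction instead of a direct measurement.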
Training spectrum generation for machine learning system for spectrographic monitoring
A method of generating training spectra for training of a neural network includes measuring a first plurality of training spectra from one or more sample substrates, measuring a characterizing value for each training spectrum of the first plurality of training spectra to generate a plurality of characterizing values, with each training spectrum having an associated characterizing value, measuring a plurality of dummy spectra during processing of one or more dummy substrates, and generating a second plurality of training spectra by combining the first plurality of training spectra and the plurality of dummy spectra, there being a greater number of spectra in the second plurality of training spectra than in the first plurality of training spectra. Each training spectrum of the second plurality of training spectra has an associated characterizing value.
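The combination step can be sketched as dataset augmentation: each measured training spectrum is combined with the process variation extracted from the unlabeled dummy spectra, and the characterizing value carries over to every combined spectrum. The array sizes and the additive combination rule below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(2)
bins = 8

# First plurality: a few measured training spectra, each with a measured
# characterizing value (e.g., a film thickness). All numbers are invented.
train_spectra = rng.normal(1.0, 0.1, size=(5, bins))
char_values = rng.uniform(100.0, 200.0, size=5)

# Dummy spectra: many unlabeled spectra from dummy-substrate runs, used only
# for their process variation (deviation from their own mean spectrum).
dummy = rng.normal(1.0, 0.1, size=(50, bins))
dummy_var = dummy - dummy.mean(axis=0)

# Second plurality: every (training spectrum, dummy variation) combination,
# each inheriting the characterizing value of its source training spectrum.
aug_spectra = (train_spectra[:, None, :] + dummy_var[None, :, :]).reshape(-1, bins)
aug_values = np.repeat(char_values, len(dummy))
```

Five labeled spectra become 250 labeled spectra: the expensive characterizing measurements are reused while the cheap dummy runs supply realistic variation.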
Document storage system
A document storage system includes a document digitizing device that makes an electronic copy of a document. A liquid neural network (LNN) annotates the digitized files with document terms, producing an annotated document. A semantic analyzer generates relationships between document terms. A blockchain-based data transaction forwards the annotated document to a server. A photonic spiking neural network (PSNN) server uploads and downloads annotated documents from the server. The interface changes automatically based on the individual's physical characteristics.
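The annotate-then-relate portion of the pipeline can be sketched with simple stand-ins: plain keyword matching in place of the LNN annotator, and term co-occurrence in place of the semantic analyzer. The term vocabulary and documents are invented examples:

```python
# Hypothetical term vocabulary (stand-in for the learned annotation model).
TERMS = {"invoice", "contract", "payment"}

def annotate(text):
    # Stand-in for the LNN annotator: tag a document with known terms it contains.
    words = text.lower().split()
    return sorted(t for t in TERMS if t in words)

def relate(annotations_per_doc):
    # Stand-in for the semantic analyzer: terms are related if they co-occur.
    pairs = set()
    for terms in annotations_per_doc:
        pairs |= {(a, b) for a in terms for b in terms if a < b}
    return pairs

docs = ["Contract for payment terms", "Invoice for the contract"]
annotations = [annotate(d) for d in docs]
relationships = relate(annotations)
```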
MACHINE LEARNING APPARATUS, SERVO CONTROL APPARATUS, SERVO CONTROL SYSTEM, AND MACHINE LEARNING METHOD
A machine learning apparatus performs reinforcement learning that avoids complicated manual adjustment of the coefficients for backlash compensation and backlash acceleration compensation. The machine learning apparatus includes a state information acquiring part for acquiring, from a servo control apparatus, state information including at least a position deviation and a set of coefficients used by a backlash acceleration compensating part, by making the servo control apparatus execute a predetermined machining program; an action information output part for outputting, to the servo control apparatus, action information including adjustment information on the set of coefficients included in the state information; a reward output part for outputting a reward value for the reinforcement learning on the basis of the position deviation included in the state information; and a value function updating part for updating an action-value function on the basis of the reward value output by the reward output part, the state information, and the action information.
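The reward output and value-function update can be sketched as tabular Q-learning on a toy servo model: the state is a discretized compensation coefficient, the actions adjust it down, keep it, or adjust it up, and the reward is the negative position deviation. The coefficient grid, deviation model, and hyperparameters are illustrative assumptions, not the patented apparatus:

```python
import random

random.seed(3)

OPT = 0.6                              # unknown best compensation coefficient
coefs = [i / 10 for i in range(11)]    # discretized coefficient set (states 0..10)
actions = [-1, 0, 1]                   # adjust the coefficient down / keep / up

def position_deviation(c):
    # Toy servo model: deviation grows with distance from the optimal coefficient.
    return abs(c - OPT)

Q = {(s, a): 0.0 for s in range(11) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2

s = 0
for t in range(3000):
    if t % 100 == 0:
        s = 0                          # restart the "machining program" periodically
    if random.random() < eps:
        a = random.choice(actions)     # explore
    else:
        a = max(actions, key=lambda x: Q[(s, x)])
    s2 = min(10, max(0, s + a))
    reward = -position_deviation(coefs[s2])        # reward output from the deviation
    # Value-function update (Q-learning):
    Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, x)] for x in actions) - Q[(s, a)])
    s = s2

best = max(range(11), key=lambda st: max(Q[(st, x)] for x in actions))
```

The learned greedy policy walks the coefficient toward the setting that minimizes position deviation, which is the adjustment the apparatus would otherwise require an operator to perform by hand.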
Intelligent control with hierarchical stacked neural networks
A method of processing information is provided. The method involves receiving a message; processing the message with a trained artificial neural network based processor, having at least one set of outputs which represent information in a non-arbitrary organization of actions based on an architecture of the artificial neural network based processor and the training; representing as a noise vector at least one data pattern in the message which is incompletely represented in the non-arbitrary organization of actions; and analyzing the noise vector distinctly from the trained artificial neural network.
Intelligent control with hierarchical stacked neural networks
A method of processing information is provided. The method involves receiving a message; processing the message with a trained artificial neural network based processor, having at least one set of outputs which represent information in a non-arbitrary organization of actions based on an architecture of the artificial neural network based processor and the training; representing as a noise vector at least one data pattern in the message which is incompletely represented in the non-arbitrary organization of actions; analyzing the noise vector distinctly from the trained artificial neural network; searching at least one database; and generating an output in dependence on said analyzing and said searching.
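The noise-vector idea in these two abstracts can be sketched with a linear stand-in: a principal subspace learned from training messages plays the role of the trained network's non-arbitrary organization, the part of a message that subspace cannot represent becomes the noise vector, and that vector is then analyzed separately by searching a small database of known signatures. The subspace model, dimensions, and signature database are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Training messages: rank-3 data, so the "trained" subspace captures them exactly.
mixing = rng.normal(size=(3, 6))
train = rng.normal(size=(200, 3)) @ mixing

# Stand-in for the trained processor: the principal subspace of the training data.
_, _, Vt = np.linalg.svd(train - train.mean(axis=0), full_matrices=False)
basis = Vt[:3]                                   # the learned "organization"

def process(message):
    recognized = (message @ basis.T) @ basis     # part the network can represent
    noise_vector = message - recognized          # incompletely represented remainder
    return recognized, noise_vector

# Database of known anomaly signatures, searched using the noise vector.
database = {"spike_at_0": np.eye(6)[0], "spike_at_5": np.eye(6)[5]}

_, clean_noise = process(train[0])               # familiar message: tiny residual
message = train[0] + 3.0 * np.eye(6)[5]          # familiar content + unfamiliar spike
_, noise = process(message)
match = max(database, key=lambda k: abs(noise @ database[k]))
```

A familiar message leaves an essentially zero noise vector, while an unfamiliar pattern survives projection and can be matched against the database, mirroring the analyze-and-search steps of the claim.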
Domain adaptation for robotic control using self-supervised learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a policy neural network for use in controlling a real-world agent in a real-world environment. One of the methods includes training the policy neural network by optimizing a first task-specific objective that measures a performance of the policy neural network in controlling a simulated version of the real-world agent; and then training the policy neural network by jointly optimizing (i) a self-supervised objective that measures at least a performance of internal representations generated by the policy neural network on a self-supervised task performed on real-world data and (ii) a second task-specific objective that measures the performance of the policy neural network in controlling the simulated version of the real-world agent.
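The two training phases can be sketched with a toy linear "policy" and mean-squared-error losses standing in for the task-specific and self-supervised objectives. The loss weights, data, and targets are illustrative assumptions, not the patented training setup:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy linear "policy network" with weights shared across both objectives.
w = rng.normal(0, 0.1, size=4)

# Simulated-agent task data and real-world self-supervised data (invented).
sim_X = rng.normal(size=(100, 4))
sim_y = sim_X @ np.array([1.0, -1.0, 0.5, 0.0])      # simulator task target
real_X = rng.normal(size=(100, 4))
real_y = real_X @ np.array([1.0, -1.0, 0.4, 0.1])    # self-supervised target

def grad_mse(w, X, y):
    # Gradient of mean-squared error, standing in for either objective.
    return 2 * X.T @ (X @ w - y) / len(X)

# Phase 1: optimize the task-specific objective alone, in simulation.
for _ in range(200):
    w -= 0.05 * grad_mse(w, sim_X, sim_y)
phase1_loss = float(np.mean((sim_X @ w - sim_y) ** 2))

# Phase 2: jointly optimize (i) the self-supervised objective on real-world
# data and (ii) the task-specific objective on the simulated agent.
for _ in range(200):
    w -= 0.05 * (0.5 * grad_mse(w, real_X, real_y) + 0.5 * grad_mse(w, sim_X, sim_y))
final_sim_loss = float(np.mean((sim_X @ w - sim_y) ** 2))
final_real_loss = float(np.mean((real_X @ w - real_y) ** 2))
```

Because the joint phase keeps the task objective in the loss, the shared representation adapts to real-world statistics without forgetting the control task learned in simulation.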