G06N3/043

Multi-time Scale Model Predictive Control of Wastewater Treatment Process
20230004780 · 2023-01-05 ·

A multi-time scale model predictive control method for the wastewater treatment process is designed to control the dissolved oxygen concentration and the nitrate nitrogen concentration on different time scales so that the effluent quality meets the standard. Because the dissolved oxygen concentration and the nitrate nitrogen concentration are sampled with different periods, the wastewater treatment process exhibits different time scales; prediction models with different time scales are therefore first designed to unify the prediction outputs onto the fast time scale. A gradient descent algorithm is then used to solve for the optimal control actions on the fast time scale to control the wastewater treatment system. This not only conforms to the operating characteristics of the wastewater treatment process, but also solves the problem of poor performance in multi-objective model predictive control caused by differing time scales. Experimental results show that the multi-time scale model predictive control method achieves accurate on-line control of the dissolved oxygen concentration and the nitrate nitrogen concentration on the fast time scale.
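The two-time-scale scheme described above can be sketched briefly. Everything below is illustrative: the linear models, gains, setpoints, and the slow/fast sampling ratio `K` are assumptions for the sketch, not values from the patent.

```python
import numpy as np

# Assumed first-order models on two time scales: dissolved oxygen (DO) is
# sampled every fast step, nitrate nitrogen (NO3) only every K fast steps.
K = 5                      # slow/fast sampling-period ratio (assumed)
a_do, b_do = 0.9, 0.1      # assumed DO model:  x' = a*x + b*u
a_no, b_no = 0.95, 0.05    # assumed NO3 model on the slow scale

def predict_fast(x_do, x_no_slow, u, step):
    """Unify both predictions onto the fast time scale: the slow-scale
    nitrate prediction is held (zero-order hold) between slow samples."""
    do_pred = a_do * x_do + b_do * u[0]
    no_pred = a_no * x_no_slow + b_no * u[1] if step % K == 0 else x_no_slow
    return do_pred, no_pred

def mpc_step(x_do, x_no, u, ref=(2.0, 1.0), lr=0.5, iters=50):
    """One gradient-descent solve for the fast-scale control move,
    minimizing a quadratic tracking cost for both concentrations."""
    u = np.asarray(u, dtype=float)
    for _ in range(iters):
        do_p = a_do * x_do + b_do * u[0]
        no_p = a_no * x_no + b_no * u[1]
        # gradient of the tracking cost with respect to each input
        grad = np.array([2 * (do_p - ref[0]) * b_do,
                         2 * (no_p - ref[1]) * b_no])
        u -= lr * grad
    return u
```

Each gradient-descent solve moves the control inputs toward the setpoints on the fast time scale, while the slow-scale nitrate prediction is simply held constant between its samples.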

Control method based on adaptive neural network model for dissolved oxygen of aeration system

A control method based on an adaptive neural network model for the dissolved oxygen of an aeration system includes: obtaining related water quality monitoring data of a sewage treatment plant and preprocessing the data; performing principal component analysis on the preprocessed water quality monitoring data together with the dissolved oxygen concentration of the aeration system, and determining the water quality parameter with the highest rate of contribution to a principal component; inputting that water quality parameter into the adaptive neural network model to predict the dissolved oxygen concentration of the aeration system; and optimizing the dissolved oxygen predictive value obtained by means of the adaptive neural network model to obtain an optimal regulation value, and performing online regulation via the fuzzy control system of the adaptive neural network model.
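The feature-selection step can be sketched with plain numpy. This is a minimal illustration of ranking parameters by their contribution to the first principal component; the feature names and the squared-loading contribution measure are assumptions, not details from the patent.

```python
import numpy as np

def top_contributor(X, feature_names):
    """Rank water-quality parameters by their loading on the first
    principal component (rate of contribution), after standardization."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    cov = np.cov(Xs, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, np.argmax(eigvals)]     # loading vector of PC1
    contribution = pc1**2 / np.sum(pc1**2)   # squared loadings sum to 1
    best = int(np.argmax(contribution))
    return feature_names[best], contribution
```

The parameter returned here would then serve as the input to the adaptive neural network model for dissolved oxygen prediction.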

Method of and server for training a machine learning algorithm for estimating uncertainty of a sequence of models

There is provided a method and server for estimating an uncertainty parameter of a sequence of computer-implemented models comprising at least one machine learning algorithm (MLA). A set of labelled digital documents is received, which is to be processed by the sequence of models. For a given model of the sequence of models, at least one of a respective set of input features, a respective set of model-specific features and a respective set of output features are received. The set of predictions output by the sequence of models is received. A second MLA is trained to estimate uncertainty of the sequence of models based on the set of labelled digital documents, and the at least one of the respective set of input features, the respective set of model-specific features, the respective set of output features, and the set of predictions.
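The training setup can be sketched as follows: the per-document input, model-specific, and output features are assembled alongside the pipeline's predictions, and a second model is fit to predict where the sequence erred. Logistic regression here is a stand-in for whatever second MLA is actually used, and all names are illustrative.

```python
import numpy as np

def make_uncertainty_features(inputs, model_feats, outputs, preds):
    """Concatenate per-document input features, model-specific features,
    output features, and the pipeline's predictions into one design
    matrix for the second MLA (layout is illustrative)."""
    return np.hstack([inputs, model_feats, outputs, preds.reshape(-1, 1)])

def train_error_predictor(X, errors, lr=0.1, iters=500):
    """Second MLA: logistic regression trained to predict whether the
    sequence of models erred on a labelled document, which serves as an
    uncertainty estimate for its predictions."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - errors                        # gradient of the log loss
        w -= lr * X.T @ g / len(errors)
        b -= lr * g.mean()
    return lambda Xn: 1.0 / (1.0 + np.exp(-(Xn @ w + b)))
```

At inference time the returned predictor scores new feature rows, with higher scores indicating lower confidence in the sequence's output.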

Techniques to add smart device information to machine learning for increased context

Disclosed are an apparatus, a system and a non-transitory computer readable medium that implement processing circuitry that receives non-dialog information from a smart device and determines a data type of the data in the received non-dialog information. Based on the determined data type, the processing circuitry transforms the received data, using an input from a machine learning algorithm, into transformed data. The transformed data is standardized data that is palatable to machine learning algorithms such as those implemented as chatbots. The standardized transformed data is useful for training multiple different chatbot systems and enables typically underutilized non-dialog information to be used as training input to improve context and conversation flow between a chatbot and a user.

SYSTEMS AND METHODS OF ASSIGNING A CLASSIFICATION TO A STATE OR CONDITION OF AN EVALUATION TARGET
20230018960 · 2023-01-19 ·

A method includes obtaining data representative of a state or condition of an evaluation target. The method also includes providing first input based on the data to a trained classifier to generate a first result. The method further includes providing second input based on the data to an adaptive neuro-fuzzy inference system to generate a second result. The method also includes assigning a classification to the state or condition of the evaluation target based on the first result and the second result.

Automated Job Flow Cancellation for Multiple Task Routine Instance Errors in Many Task Computing

An apparatus including a processor to: within a kill container, in response to a set of error messages indicative of errors in executing multiple instances of a task routine to perform a task of a job flow with multiple data object blocks of a data object, and in response to the quantity of error messages reaching a threshold, output a kill tasks request message that identifies the job flow; within a task container, in response to the kill tasks request message, cease execution of the task routine and output a task cancelation message that identifies the task and the job flow; and within a performance container, in response to the task cancelation message, output a job cancelation message to cause the transmission of an indication of cancelation of the job flow, via a network, to a requesting device that requested the performance of the job flow.
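The kill-container side of this flow reduces to a small piece of bookkeeping: count error messages per job flow and emit a single kill-tasks request once the threshold is reached. The sketch below is illustrative; class and field names are not from the patent claims.

```python
from dataclasses import dataclass, field

@dataclass
class KillContainer:
    """Minimal sketch of the kill-container logic: count error messages
    per job flow and emit one kill-tasks request once a threshold is
    reached (names and structure are illustrative)."""
    threshold: int
    counts: dict = field(default_factory=dict)
    killed: set = field(default_factory=set)

    def on_error(self, job_flow_id: str):
        """Return a kill-tasks request identifying the job flow when the
        error count reaches the threshold; otherwise return None."""
        if job_flow_id in self.killed:
            return None
        self.counts[job_flow_id] = self.counts.get(job_flow_id, 0) + 1
        if self.counts[job_flow_id] >= self.threshold:
            self.killed.add(job_flow_id)
            return {"kill_tasks": job_flow_id}
        return None
```

Tracking already-killed job flows ensures the request is emitted exactly once, even if further error messages for the same job flow arrive afterward.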

Method of multi-sensor data fusion
11552778 · 2023-01-10 ·

A method of multi-sensor data fusion includes determining a plurality of first data sets using a plurality of sensors, each of the first data sets being associated with a respective one of a plurality of sensor coordinate systems, and each of the sensor coordinate systems being defined in dependence of a respective one of a plurality of mounting positions for the sensors; transforming the first data sets into a plurality of second data sets using a transformation rule, each of the second data sets being associated with a unified coordinate system, the unified coordinate system being defined in dependence of at least one predetermined reference point; and determining at least one fused data set by fusing the second data sets.
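The transform-then-fuse pipeline can be sketched in a few lines. The rigid-body transform (rotation plus mounting-position translation) follows the claim; the averaging fusion rule is only a placeholder, since the claim leaves the fusion step generic.

```python
import numpy as np

def to_unified(points, rotation, mounting_pos):
    """Transform one sensor's data set from its sensor coordinate system
    into the unified coordinate system: rotate by the sensor's mounting
    orientation, then translate by its mounting position relative to the
    predetermined reference point."""
    return points @ rotation.T + mounting_pos

def fuse(transformed_sets):
    """Placeholder fusion rule: average the transformed data sets
    point-by-point."""
    return np.mean(np.stack(transformed_sets), axis=0)
```

Because both sensors' observations end up in the same unified coordinate system, a single physical target seen from different mounting positions maps to the same fused location.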

Method and apparatus for producing a machine learning system for malware prediction in low complexity sensor networks
11550908 · 2023-01-10 ·

One embodiment of this invention describes a method and apparatus for the use of Machine Learning to efficiently detect, identify, prevent, and predict cyber-attacks on Low Power and Low Complexity Sensor 100 (FIG. 1) networks that have low data transmission requirements, something that all current Machine Learning techniques are unable to accomplish due to numerous restrictions when applied to Low Power and Low Complexity Sensors. Low Power and Low Complexity Sensors are frequently found in various Internet of Things (IOT) network architectures. The IOT is a network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, actuators, and connectivity which enables them to connect and exchange data, providing more direct integration of the physical world into computer-based systems. However, this should not restrict the applicability of any potential embodiment of this invention as described in this patent application. A further understanding of the nature and the advantages of the particular embodiments disclosed herein may be realized by referencing the remaining portions of the specification.

Equipment utilizing human recognition and method for utilizing the same

Equipment utilizing human recognition and a method for utilizing the same are provided. The method for utilizing human recognition includes updating a moving image database to include information about a moving image in which a cluster subject appears, the information being extracted based on clustering using a face feature; receiving a search condition; and detecting moving image information using the database. According to the present disclosure, a skeleton can be analyzed and a face can be recognized using an artificial intelligence (AI) model performing deep learning through a fifth generation (5G) network; using the analysis result, a photographing composition can be determined, and moving image information can be constructed at an edge.

Fuzzy cyber detection pattern matching

Mechanisms for identifying a pattern of computing resource activity of interest, in activity data characterizing activities of computer system elements, are provided. A temporal graph of the activity data is generated and a filter is applied to the temporal graph to generate one or more first vector representations, each characterizing nodes and edges within a moving window defined by the filter. The filter is applied to a pattern graph representing a pattern of entities and events indicative of the pattern of interest, to generate a second vector representation. The second vector representation is compared to the one or more first vector representations to identify one or more nearby vectors, and one or more corresponding subgraph instances are output to an intelligence console computing system as inexact matches of the temporal graph.
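The window-vectorize-compare loop can be illustrated with a deliberately simple filter: counting event types inside each moving window and comparing the resulting vectors to the pattern's vector by cosine similarity. The event-count filter and the similarity threshold are stand-ins for the patent's graph filter and matching criterion.

```python
import numpy as np

def window_vectors(events, window, step, vocab):
    """First vector representations: for each moving window over the
    time-ordered activity events, count the event types inside it (a
    simple filter standing in for the graph filter)."""
    vecs, spans = [], []
    times = [t for t, _ in events]
    t, t_end = min(times), max(times)
    while t <= t_end:
        counts = {v: 0 for v in vocab}
        for ts, etype in events:
            if t <= ts < t + window and etype in counts:
                counts[etype] += 1
        vecs.append(np.array([counts[v] for v in vocab], float))
        spans.append((t, t + window))
        t += step
    return vecs, spans

def nearby_windows(vecs, spans, pattern_vec, threshold=0.8):
    """Compare the pattern's vector against each window vector by cosine
    similarity and return the spans that are inexact (fuzzy) matches."""
    hits = []
    for v, span in zip(vecs, spans):
        denom = np.linalg.norm(v) * np.linalg.norm(pattern_vec)
        if denom and v @ pattern_vec / denom >= threshold:
            hits.append(span)
    return hits
```

The returned spans correspond to the subgraph instances that would be reported to the intelligence console as inexact matches.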