G06N3/048

Method and apparatus providing a trained signal classification neural network

A method for providing a training data set used for training a signal classification neural network is provided. The method includes generating at least one first virtual waveform primitive comprising a predetermined signal level and at least one second virtual waveform primitive comprising a signal edge. The training data set is formed from a predetermined number of generated virtual waveform primitives, including first virtual waveform primitives and second virtual waveform primitives. Each virtual waveform primitive comprises a sequence of time- and amplitude-discrete values. The training data set is used for training the signal classification neural network.
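The abstract above can be sketched in a few lines of numpy. This is a minimal illustration, not the patented method: the linear-ramp edge, the value ranges, and the 50/50 class mix are all assumptions made for the example.

```python
import numpy as np

def level_primitive(n, level):
    # First primitive: a predetermined constant signal level.
    return np.full(n, float(level))

def edge_primitive(n, lo, hi):
    # Second primitive: a signal edge, modeled here as a linear ramp.
    return np.linspace(lo, hi, n)

def make_training_set(num_primitives, n=64, seed=0):
    # Each primitive is a sequence of time- and amplitude-discrete values.
    rng = np.random.default_rng(seed)
    data, labels = [], []
    for _ in range(num_primitives):
        if rng.random() < 0.5:
            data.append(level_primitive(n, rng.uniform(-1.0, 1.0)))
            labels.append(0)          # class 0: level
        else:
            lo, hi = sorted(rng.uniform(-1.0, 1.0, size=2))
            data.append(edge_primitive(n, lo, hi))
            labels.append(1)          # class 1: edge
    return np.stack(data), np.array(labels)
```

The resulting `(num_primitives, n)` array and label vector would then feed a standard classifier training loop.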

Complex-valued neural network with learnable non-linearities in medical imaging

For machine training and application of a trained complex-valued machine learning model, an activation function of the machine learning model, such as a neural network, includes a learnable parameter that is complex or defined in a complex domain with two dimensions, such as real and imaginary or magnitude and phase dimensions. The complex learnable parameter is trained for any of various applications, such as MR fingerprinting, other medical imaging, or non-medical uses.
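A toy forward pass shows how a single learnable complex parameter can act in both of the two dimensions the abstract mentions: its magnitude shapes the amplitude non-linearity and its angle shifts the phase. The `tanh` magnitude curve is an assumed example, not the activation from the patent.

```python
import numpy as np

def complex_activation(z, w):
    """Activation with a learnable complex parameter w.
    |w| scales the magnitude non-linearity; angle(w) shifts the phase."""
    mag = np.tanh(np.abs(z) * np.abs(w))     # magnitude dimension
    phase = np.angle(z) + np.angle(w)        # phase dimension
    return mag * np.exp(1j * phase)
```

In training, `w` would be updated like any other weight, with gradients taken with respect to its real and imaginary (or magnitude and phase) components.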

System and method for predicting fall armyworm using weather and spatial dynamics

A dynamic graph includes a plurality of nodes and edges at a plurality of time steps; each node corresponds to a geographic location in a first area where pest infestation information is available for a subset of locations. Each edge connects two geographically proximate nodes, has a direction based on wind direction, and has a weight based on relative wind speed. Node features are assigned based on weather data, along with labels corresponding to pest infestation severity, and a graph convolutional network is trained on the dynamic graph. Based on predicted future weather conditions for a second area different from the first area, the trained graph convolutional network is used to predict, via inductive learning, future pest infestation severity for a new set of nodes corresponding to new geographic locations in the second area, for which no pest infestation information is available.
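The graph-construction step can be sketched for one time step: an edge points downwind between proximate nodes, weighted by how well the displacement aligns with the wind and by wind speed. The proximity radius, the alignment weighting, and the mean-normalized GCN layer are assumptions for illustration.

```python
import numpy as np

def wind_adjacency(pos, wind_dir, wind_speed, radius=1.5):
    """Directed, wind-weighted adjacency for one time step.
    pos: (N, 2) coordinates; wind_dir: unit vector; wind_speed: scalar."""
    n = len(pos)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            if dist <= radius:
                align = d @ wind_dir / dist      # downwind alignment
                if align > 0:                    # edge direction follows wind
                    A[i, j] = align * wind_speed # weight from relative wind speed
    return A

def gcn_layer(A, X, W):
    # One graph-convolution step: normalized aggregation, linear map, ReLU.
    A_hat = A + np.eye(len(A))
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.maximum((A_hat / deg) @ X @ W, 0.0)
```

Because the layer only uses local aggregation, the trained weights `W` transfer to a new node set in the second area, which is what makes the inductive prediction possible.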

Learning apparatus, generation apparatus, classification apparatus, learning method, and non-transitory computer readable storage medium
11580362 · 2023-02-14

According to one aspect of an embodiment, a learning apparatus includes a first acquiring unit that acquires first output information output by an output layer when predetermined input information is input to a model that includes an input layer, a plurality of intermediate layers, and the output layer. The learning apparatus includes a second acquiring unit that acquires intermediate output information based on pieces of intermediate information output by the plurality of intermediate layers when the input information is input to the model. The learning apparatus includes a learning unit that trains the model based on the first output information and the intermediate output information.
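A minimal sketch of the two acquiring units: one forward pass yields both the output-layer result and the per-layer intermediate information, and a combined objective uses both. The particular auxiliary term (an activation penalty) and its weight are assumptions; the patent does not specify the loss form.

```python
import numpy as np

def forward(x, weights):
    """Collect the output-layer result (first output information) and the
    intermediate information from every hidden layer."""
    intermediates = []
    h = x
    for W in weights[:-1]:
        h = np.maximum(h @ W, 0.0)   # ReLU hidden layer
        intermediates.append(h)
    out = h @ weights[-1]            # output layer
    return out, intermediates

def combined_loss(out, intermediates, target, alpha=0.1):
    # Learn from the first output information plus the intermediate
    # output information (here: a simple activation regularizer).
    main = np.mean((out - target) ** 2)
    aux = sum(np.mean(h ** 2) for h in intermediates)
    return main + alpha * aux
```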

Methods, systems, and apparatuses for torque control utilizing roots of pseudo neural network

In various embodiments, methods, systems, and vehicle apparatuses are provided. A method for implementing torque control uses a Neural Network (NN) as a torque prediction model and includes: receiving a set of measured vehicle operating inputs associated with torque prediction; substituting measured values for the set of multiple independent variables in the torque prediction model so that the NN takes the form of a simplified pseudo-NN containing a reduced variable set of one independent variable; processing the set of measured vehicle operating inputs with the pseudo-NN, which uses only the one independent variable in its simplified mathematical expression; and solving for at least one root of the pseudo-NN's simplified mathematical expression, obtaining a root value without relying on an inversion of a mathematical expression over the entire set of independent variables.
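The core idea reduces to: freeze all but one input of the network, then root-find on the resulting single-variable function. The sketch below uses a one-hidden-layer network and bisection; both are illustrative stand-ins, not the patented formulation.

```python
import numpy as np

def nn_torque(inputs, W1, b1, W2, b2):
    # Small feed-forward torque prediction model (one hidden layer).
    return np.tanh(inputs @ W1 + b1) @ W2 + b2

def pseudo_nn(nn, free_index, measured):
    """Substitute measured values for every independent variable except one,
    reducing the full NN to a single-variable expression."""
    def f(x):
        v = measured.copy()
        v[free_index] = x
        return nn(v)
    return f

def solve_root(f, target, lo, hi, tol=1e-8):
    # Bisection on f(x) - target: a root is obtained without inverting the
    # network's full multi-variable expression.
    g = lambda x: f(x) - target
    assert g(lo) * g(hi) <= 0, "target must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With monotone weights the bracketing condition is easy to satisfy; in general one would scan the operating range for sign changes before bisecting.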

Medical image segmentation method based on U-Net

A medical image segmentation method based on a U-Net, including: sending a real segmentation image and an original image to a generative adversarial network for data enhancement to generate a composite image with a label; putting the composite image into the original data set to obtain an expanded data set; and sending the expanded data set to an improved multi-feature fusion segmentation network for training. A dilated convolution module is added between the shallow and deep feature skip connections of the segmentation network to obtain receptive fields of different sizes, which enhances the fusion of detail information and deep semantics, improves adaptability to the size of the segmentation target, and improves medical image segmentation accuracy. The over-fitting problem that occurs when training the segmentation network is alleviated by the expanded data set from the generative adversarial network.
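The dilated convolution module's effect is easiest to see in 1-D: inserting gaps between kernel taps enlarges the receptive field without adding weights, and parallel branches at different rates see different scales. This is a schematic of the mechanism, assuming "same" padding and sum fusion, not the patent's 2-D module.

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """1-D dilated convolution with 'same' padding: rate-1 zeros between
    taps widen the receptive field at no extra parameter cost."""
    k = len(kernel)
    span = (k - 1) * rate
    pad = span // 2
    xp = np.pad(x, (pad, span - pad))
    return np.array([sum(kernel[t] * xp[i + t * rate] for t in range(k))
                     for i in range(len(x))])

def dilated_module(x, kernel, rates=(1, 2, 4)):
    # Parallel dilated branches fused by summation: multi-scale receptive fields.
    return sum(dilated_conv1d(x, kernel, r) for r in rates)
```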

Machine vision as input to a CMP process control algorithm

During chemical mechanical polishing of a substrate, a signal value that depends on a thickness of a layer in a measurement spot on a substrate undergoing polishing is determined by a first in-situ monitoring system. An image of at least the measurement spot of the substrate is generated by a second in-situ imaging system. Machine vision processing, e.g., a convolutional neural network, is used to determine a characterizing value for the measurement spot based on the image. Then a measurement value is calculated based on both the characterizing value and the signal value.
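The fusion step can be shown schematically: the characterizing value from machine vision selects how the monitor signal is converted to a measurement. Both the mean-intensity classifier (standing in for the CNN) and the per-class gain/offset table are assumptions made for the example.

```python
import numpy as np

def characterize(image):
    # Stand-in for the machine-vision step (a CNN in the abstract):
    # classify the measurement spot from its mean intensity.
    return "bare" if image.mean() > 0.5 else "filmed"

def measurement_value(signal_value, characterizing_value, calibration):
    # Combine the characterizing value with the monitor's signal value.
    gain, offset = calibration[characterizing_value]
    return gain * signal_value + offset
```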

Tensor dropout using a mask having a different ordering than the tensor

A method for selectively dropping out feature elements from a tensor in a neural network is disclosed. The method includes receiving a first tensor from a first layer of a neural network. The first tensor includes multiple feature elements arranged in a first order. A compressed mask for the first tensor is obtained. The compressed mask includes single-bit mask elements respectively corresponding to the multiple feature elements of the first tensor, arranged in a second order that is different from the first order of the corresponding feature elements in the first tensor. Feature elements from the first tensor are selectively dropped out based on the compressed mask to form a second tensor, which is propagated to a second layer of the neural network.
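A concrete reading of the two orderings: the tensor is channel-first `(C, H, W)` while the single-bit mask is packed in channel-last `(H, W, C)` order, so applying it requires unpacking and a permutation. The specific pair of orderings is an assumption for illustration.

```python
import numpy as np

def dropout_with_compressed_mask(tensor_chw, packed_mask):
    """tensor_chw is (C, H, W) -- the 'first order'; the single-bit mask is
    bit-packed in (H, W, C) -- a different 'second order'."""
    c, h, w = tensor_chw.shape
    bits = np.unpackbits(packed_mask, count=c * h * w)  # one bit per element
    mask_hwc = bits.reshape(h, w, c)                    # mask's own ordering
    mask_chw = np.transpose(mask_hwc, (2, 0, 1))        # reorder to match tensor
    return tensor_chw * mask_chw                        # drop masked-out elements
```

Storing one bit per element instead of one byte or float is what makes the mask "compressed"; the reorder is a cheap view-level transpose.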

Generating approximations of cardiograms from different source configurations
11576624 · 2023-02-14

Systems are provided for generating data representing electromagnetic states of a heart for medical, scientific, research, and/or engineering purposes. The systems generate the data based on source configurations, such as the dimensions of a heart and the location of scar, fibrosis, or pro-arrhythmic substrate within it, together with a computational model of the electromagnetic output of the heart. The systems may dynamically generate the source configurations to provide representative source configurations that may be found in a population. For each source configuration of the electromagnetic source, the systems run a simulation of the functioning of the heart to generate modeled electromagnetic output (e.g., an electromagnetic mesh for each simulation step with a voltage at each point of the electromagnetic mesh) for that source configuration. The systems may generate a cardiogram for each source configuration from the modeled electromagnetic output of that source configuration for use in predicting the source location of an arrhythmia.
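The pipeline shape (simulate a voltage mesh per step, then project it to leads) can be sketched with a toy model. The travelling-wave voltages, the 0.1 scar attenuation, and the linear lead projection are all assumptions; real systems use a physiological computational model.

```python
import numpy as np

def simulate_em_mesh(n_points, n_steps, scar_mask, rng):
    """Toy stand-in for the heart's computational model: an oscillating
    activation pattern with suppressed voltage at scar/fibrosis points."""
    t = np.arange(n_steps)[:, None]
    phase = rng.uniform(0, 2 * np.pi, n_points)[None, :]
    v = np.sin(2 * np.pi * t / n_steps + phase)
    v[:, scar_mask] *= 0.1      # pro-arrhythmic substrate lowers output
    return v                     # (n_steps, n_points): voltage per mesh point

def cardiogram(mesh_v, lead_weights):
    # Each lead is a linear combination of mesh voltages (a lead-field model).
    return mesh_v @ lead_weights  # (n_steps, n_leads)
```

Sweeping `scar_mask` over candidate locations and comparing the resulting cardiograms to a patient's recording is one way such simulated libraries support source-location prediction.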

Sensor fusion

According to one aspect, a long short-term memory (LSTM) cell for sensor fusion may include M number of forget gates, M number of input gates, and M number of output gates. The M number of forget gates may receive M sets of sensor encoding data from M number of sensors and a shared hidden state. The M number of input gates may receive the corresponding M sets of sensor encoding data and the shared hidden state. The M number of output gates may generate M partial shared cell state outputs and M partial shared hidden state outputs based on the M sets of sensor encoding data, the shared hidden state, and a shared cell state.
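A step of such a cell can be sketched: each sensor gets its own forget/input/output gates, all operating on the shared hidden and cell states, producing per-sensor partial states. How the partials are fused into the next shared state is not stated in the abstract; the mean used here is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FusionLSTMCell:
    """M-sensor LSTM cell: per-sensor gates, shared hidden/cell state."""
    def __init__(self, m, d, seed=0):
        rng = np.random.default_rng(seed)
        shape = (m, 2 * d, d)                  # input is concat(x_k, h_shared)
        self.Wf = rng.standard_normal(shape) * 0.1   # forget gates
        self.Wi = rng.standard_normal(shape) * 0.1   # input gates
        self.Wo = rng.standard_normal(shape) * 0.1   # output gates
        self.Wc = rng.standard_normal(shape) * 0.1   # candidate cell state

    def step(self, xs, h_shared, c_shared):
        # xs: list of M sensor encodings, each of shape (d,)
        partial_c, partial_h = [], []
        for k, x in enumerate(xs):
            z = np.concatenate([x, h_shared])
            f = sigmoid(z @ self.Wf[k])        # per-sensor forget gate
            i = sigmoid(z @ self.Wi[k])        # per-sensor input gate
            o = sigmoid(z @ self.Wo[k])        # per-sensor output gate
            c_k = f * c_shared + i * np.tanh(z @ self.Wc[k])
            partial_c.append(c_k)              # partial shared cell state
            partial_h.append(o * np.tanh(c_k)) # partial shared hidden state
        # Fuse the M partial states (mean fusion is an assumption here).
        return np.mean(partial_h, axis=0), np.mean(partial_c, axis=0)
```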