Patent classifications
G06N3/04
INTERNET-OF-THINGS EDGE SERVICES FOR DEVICE FAULT DETECTION BASED ON CURRENT SIGNALS
Methods, systems, and computer-readable storage media for receiving, by an anomalous operation detection service, current signal data representing a driving current applied to a device over a time period; processing, by the anomalous operation detection service, the current signal data through a deep neural network (DNN) module, a frequency spectrum analysis (FSA) module, and a time series classifier (TSC) module to provide a set of indications, each indication in the set indicating one of normal operation and anomalous operation of the device; processing, by the anomalous operation detection service, the set of indications through a voting gate to provide an output indication indicating one of normal operation and anomalous operation of the device; and selectively transmitting one or more of an alert and a message based on the output indication.
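As a minimal sketch of the three-module ensemble and voting gate described above (module behavior is stubbed out; all function names are hypothetical, not from the patent):

```python
def voting_gate(indications):
    """Majority vote over per-module indications of 'normal'/'anomalous'."""
    anomalous = sum(1 for i in indications if i == "anomalous")
    return "anomalous" if anomalous > len(indications) / 2 else "normal"

def detect(current_signal, modules):
    # Each module (standing in for the DNN, FSA, and TSC modules) maps the
    # current signal data to an indication.
    indications = [module(current_signal) for module in modules]
    return voting_gate(indications)

# Stub modules in place of the trained DNN, FSA, and TSC components:
modules = [lambda s: "anomalous", lambda s: "normal", lambda s: "anomalous"]
detect([0.1, 0.2, 0.3], modules)  # two of three modules vote "anomalous"
```

A real deployment would replace the stubs with the trained modules; the voting gate could also be weighted rather than a simple majority.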
System and Method For Regularized Evolutionary Population-Based Training
The present invention relates to metalearning of deep neural network (DNN) architectures and hyperparameters. Specifically, the present system and method utilizes Evolutionary Population-Based Training (EPBT), which interleaves the training of a DNN's weights with the metalearning of loss functions. The loss functions are parameterized using multivariate Taylor expansions that EPBT can directly optimize. Further, the EPBT-based system and method uses a quality-diversity heuristic called Novelty Pulsation, as well as knowledge distillation, to prevent overfitting during training. The discovered hyperparameters adapt to the training process and serve to regularize the learning task by discouraging overfitting to the labels. EPBT thus demonstrates a practical instantiation of regularization metalearning based on simultaneous training.
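To illustrate the kind of parameterization involved, the sketch below (an illustrative assumption, not the patented implementation) expresses a loss as a second-order multivariate Taylor expansion in the label and prediction, whose coefficient vector an evolutionary outer loop such as EPBT could optimize directly:

```python
import numpy as np

def taylor_loss(theta, y, p):
    """Second-order Taylor expansion in (y, p) around 0.

    theta holds the coefficients of the terms 1, y, p, y^2, y*p, p^2;
    evolving theta changes the shape of the loss surface.
    """
    terms = np.array([1.0, y, p, y * y, y * p, p * p])
    return float(theta @ terms)

# One point in this coefficient space recovers squared error,
# since (y - p)^2 = y^2 - 2*y*p + p^2:
theta_mse = np.array([0.0, 0.0, 0.0, 1.0, -2.0, 1.0])
taylor_loss(theta_mse, y=1.0, p=0.25)  # (1 - 0.25)^2 = 0.5625
```

Because the loss is a smooth function of `theta`, an evolutionary search can mutate and recombine coefficient vectors while the DNN's weights continue training under the current loss.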
System and Method for Distributed Data Processing
A distributed data processing system includes a processing center or algorithm persistence system (“APS”), a series of remote caching nodes in electronic communication with the APS, and a series of remote computing or processing nodes in electronic communication with the remote caching nodes. Each remote caching node is mounted to a top surface of a mobile vehicle and includes a data transmitter/receiver (transceiver), computer hardware and software to operate the caching node, and memory to store data for transfer from the APS to the remote processing nodes. The remote processing nodes include a series of electricity-generating solar panels, a series of electronic data processing chips, electronic data memory, an electronic data transmitter/receiver (transceiver), and a motion sensor. The electronic data processing chips are preferably tensor processing units (TPUs): AI accelerator application-specific integrated circuits (ASICs) developed specifically for neural network machine learning.
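The APS-to-cache-to-processor topology can be sketched as follows (a minimal model of the data flow only; all class and method names are illustrative placeholders, not from the patent):

```python
class ProcessingNode:
    """Remote processing node: receives data relayed by a caching node."""
    def __init__(self):
        self.memory = []
    def receive(self, data):
        self.memory.append(data)       # store in electronic data memory
        return f"processed:{data}"

class CachingNode:
    """Remote caching node: holds a copy and relays to processing nodes."""
    def __init__(self, downstream):
        self.cache = []
        self.downstream = downstream   # list of ProcessingNode
    def relay(self, data):
        self.cache.append(data)        # cache locally before forwarding
        return [node.receive(data) for node in self.downstream]

class APS:
    """Processing center / algorithm persistence system."""
    def __init__(self, caches):
        self.caches = caches
    def dispatch(self, data):
        results = []
        for cache in self.caches:
            results.extend(cache.relay(data))
        return results

nodes = [ProcessingNode(), ProcessingNode()]
aps = APS([CachingNode(nodes)])
aps.dispatch("batch-1")  # each processing node handles the batch
```

The real system would route over the transceivers described above; this sketch only mirrors the APS → caching node → processing node hierarchy.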
MACHINE LEARNING TECHNIQUES FOR EFFICIENT DATA PATTERN RECOGNITION ACROSS STRUCTURED DATA OBJECTS
Various embodiments of the present invention provide methods, apparatus, systems, computing devices, computing entities, and/or the like for performing predictive data analysis with respect to structured data objects. Certain embodiments of the present invention utilize systems, methods, and computer program products that perform predictive data analysis with respect to structured data objects by utilizing at least one of cross-table data similarity score generation machine learning models and unsupervised anomalous table row detection machine learning models.
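The abstract does not specify the scoring function; as a purely illustrative stand-in, a cross-table similarity score could be as simple as the Jaccard overlap of the cell values appearing in two structured data objects:

```python
def cross_table_similarity(table_a, table_b):
    """Jaccard overlap of cell values across two tables (rows of cells).

    Hypothetical stand-in for a learned cross-table similarity model.
    """
    values_a = {v for row in table_a for v in row}
    values_b = {v for row in table_b for v in row}
    if not values_a and not values_b:
        return 1.0
    return len(values_a & values_b) / len(values_a | values_b)

t1 = [["id", 1], ["id", 2]]
t2 = [["id", 2], ["id", 3]]
cross_table_similarity(t1, t2)  # |{"id", 2}| / |{"id", 1, 2, 3}| = 0.5
```

A machine learning model as in the claims would replace this set overlap with learned embeddings of rows or columns, but the interface (two tables in, one score out) is the same.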
SELF-ORGANIZING GENERALIZATION HIERARCHY WITH BINDINGS OF PROPOSITIONS
A memory for storing a directed acyclic graph (DAG) for access by an application being executed by one or more processors of a computing device is described. The DAG includes a plurality of nodes, wherein each node represents a data point within the DAG. The DAG further includes a plurality of directional edges, each connecting a pair of the nodes and representing a covering-covered relationship between the two nodes. Each node defines a subgraph consisting of the node itself and all other nodes reachable via a covering path, i.e., a sequence of covering and covered nodes. Each node comprises a set of node parameters including at least an identifier and an address range. Each node, together with a legal address, specifies a covering path. Utilizing DAG Path Addressing with bindings, the memory can be organized to store a generalization hierarchy of logical propositions.
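A minimal sketch of such a covering DAG, assuming hypothetical field names (`identifier`, `address_range`, `covered`) that are not taken from the patent:

```python
class Node:
    """A DAG node with an identifier, an address range, and covering edges."""
    def __init__(self, identifier, address_range):
        self.identifier = identifier
        self.address_range = address_range  # (start, end)
        self.covered = []                   # outgoing covering-covered edges

    def add_covered(self, node):
        self.covered.append(node)

    def reachable(self):
        """Identifiers of all nodes reachable via covering paths,
        i.e. this node's subgraph."""
        seen, stack = set(), [self]
        while stack:
            n = stack.pop()
            if n.identifier not in seen:
                seen.add(n.identifier)
                stack.extend(n.covered)
        return seen

# A two-level generalization hierarchy: "animal" covers "dog".
root = Node("animal", (0, 100))
dog = Node("dog", (0, 50))
root.add_covered(dog)
sorted(root.reachable())  # the root's subgraph: ['animal', 'dog']
```

The address ranges would support the path-addressing scheme the abstract mentions; this sketch only shows the covering-reachability structure.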
METHODS OF ENCODING AND DECODING, ENCODER AND DECODER PERFORMING THE METHODS
Provided are an encoding method according to various example embodiments and an encoder performing the method. The encoding method includes: outputting a linear prediction (LP) coefficients bitstream and a residual signal by performing a linear prediction analysis on an input signal; outputting a first latent signal obtained by encoding a periodic component of the residual signal, using a first neural network module; outputting a first bitstream obtained by quantizing the first latent signal, using a quantization module; outputting a second latent signal obtained by encoding an aperiodic component of the residual signal, using the first neural network module; and outputting a second bitstream obtained by quantizing the second latent signal, using the quantization module, wherein the aperiodic component of the residual signal is calculated based on a periodic component of the residual signal decoded from the quantized first latent signal that is output by de-quantizing the first bitstream.
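The data flow above can be sketched schematically as follows. This is an assumption-laden toy: the neural network module is replaced by an identity stub, the quantizer by uniform rounding, and the LP analysis and periodic/aperiodic split by trivial placeholders; only the ordering of steps mirrors the claim, in particular computing the aperiodic component against the *decoded* periodic component so encoder and decoder stay in sync:

```python
import numpy as np

def lp_analysis(signal, order=2):
    # Placeholder: real LP analysis would solve for prediction coefficients
    # and emit a true residual; here the residual is the signal itself.
    return np.zeros(order), np.asarray(signal, dtype=float)

def encode(x):                  # stand-in for the first neural network module
    return x

def quantize(latent, step=0.5):
    return np.round(latent / step).astype(int)

def dequantize(bits, step=0.5):
    return bits * step

def encode_frame(signal):
    coeffs, residual = lp_analysis(signal)
    periodic = residual * 0.8   # toy periodic/aperiodic decomposition
    first_bits = quantize(encode(periodic))
    # Per the claim, the aperiodic component is computed from the periodic
    # component decoded out of the first bitstream, not the original one.
    decoded_periodic = dequantize(first_bits)
    aperiodic = residual - decoded_periodic
    second_bits = quantize(encode(aperiodic))
    return coeffs, first_bits, second_bits
```

Because the aperiodic component absorbs the quantization error of the periodic path, de-quantizing and summing both bitstreams approximately reconstructs the residual.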