Patent classifications
G06F11/1476
Reducing the cost of n modular redundancy for neural networks
An N modular redundancy method, system, and computer program product include a computer-implemented N modular redundancy method for neural networks, the method including selectively replicating the neural network by employing either checker neural networks or selective N modular redundancy (N-MR) applied only to critical computations.
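The selective N-MR idea can be illustrated with a minimal sketch (N = 3 here; `critical_op` and the voting scheme are assumptions for illustration, not the patent's actual design): only the computation flagged as critical is replicated, and a majority vote selects the result.

```python
# Minimal sketch of selective N-modular redundancy (N = 3):
# replicate only the critical computation and majority-vote the results.
from collections import Counter

def critical_op(x):
    # stand-in for a critical neural-network computation
    return x * 2 + 1

def selective_nmr(x, op, n=3):
    """Run `op` n times and return the majority-vote result."""
    results = [op(x) for _ in range(n)]
    winner, count = Counter(results).most_common(1)[0]
    if count <= n // 2:
        raise RuntimeError("no majority: uncorrectable fault")
    return winner

print(selective_nmr(10, critical_op))  # 21 in the fault-free case
```

Non-critical computations would bypass `selective_nmr` entirely, which is where the cost reduction comes from.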
INFERENCE CALCULATION FOR NEURAL NETWORKS WITH PROTECTION AGAINST MEMORY ERRORS
A method for operating a hardware platform for the inference calculation of a layered neural network. In the method: a first portion of input data which are required for the inference calculation of a first layer of the neural network and redundancy information relating to the input data are read in from an external working memory into an internal working memory of the computing unit; the integrity of the input data is checked based on the redundancy information; in response to the input data being identified as error-free, the computing unit carries out at least part of the first-layer inference calculation for the input data to obtain a work result; redundancy information for the work result is determined, based on which the integrity of the work result can be verified; the work result and the redundancy information are written to the external working memory.
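A toy sketch of this read-check-compute-write cycle, using CRC32 as the redundancy information (the dictionary standing in for external memory and the doubling "layer" are illustrative assumptions, not the patent's actual hardware):

```python
# Sketch: CRC32 as redundancy information protecting data that moves
# between external and internal working memory.
import zlib

external_memory = {}  # stand-in for the external working memory

def write_protected(key, data: bytes):
    # store the data together with its redundancy information
    external_memory[key] = (data, zlib.crc32(data))

def read_protected(key) -> bytes:
    # check integrity before the data is used for inference
    data, crc = external_memory[key]
    if zlib.crc32(data) != crc:
        raise IOError(f"memory error detected in {key!r}")
    return data

# Write layer input, read it back with an integrity check, compute a
# (toy) layer result, and write the result with fresh redundancy info.
write_protected("layer1_in", bytes([1, 2, 3, 4]))
x = read_protected("layer1_in")
result = bytes(2 * b for b in x)  # stand-in for the inference step
write_protected("layer1_out", result)
print(list(read_protected("layer1_out")))  # [2, 4, 6, 8]
```

A bit flip in `external_memory` between write and read would make the stored CRC mismatch and raise, which is the error-detection behavior the method relies on.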
Neural network quantization parameter determination method and related products
The present disclosure relates to a neural network quantization parameter determination method and related products. A board card in the related products includes a memory device, an interface device, a control device, and an artificial intelligence chip, in which the artificial intelligence chip is connected with the memory device, the control device, and the interface device respectively. The memory device is configured to store data, and the interface device is configured to transmit data between the artificial intelligence chip and an external device. The control device is configured to monitor the state of the artificial intelligence chip. The board card can be used to perform an artificial intelligence computation.
System and method for automatically generating neural networks for anomaly detection in log data from distributed systems
A system and method for automatically generating recurrent neural networks for log anomaly detection uses a controller recurrent neural network that generates an output set of hyperparameters when an input set of controller parameters is applied to the controller recurrent neural network. The output set of hyperparameters is applied to a target recurrent neural network to produce a child recurrent neural network with an architecture that is defined by the output set of hyperparameters. The child recurrent neural network is then trained, and a log classification accuracy of the child recurrent neural network is computed. Using the log classification accuracy, at least one of the controller parameters used to generate the child recurrent neural network is adjusted to produce a different input set of controller parameters to be applied to the controller recurrent neural network so that a different child recurrent neural network for log anomaly detection can be generated.
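The controller loop can be sketched in highly simplified form. Everything here is a stand-in (the controller is reduced to a parameterized mapping, the child's training to a scoring function, and the adjustment step to random-perturbation hill climbing), but the feedback structure matches the abstract: controller parameters produce hyperparameters, a child is evaluated, and the accuracy drives the next set of controller parameters.

```python
# Simplified sketch of the controller feedback loop; all functions are
# illustrative stand-ins for the controller and child RNNs.
import random

random.seed(0)

def controller(params):
    # stand-in for the controller RNN: derive an output set of
    # hyperparameters from the input set of controller parameters
    return {"hidden_units": max(1, int(params["scale"] * 32))}

def train_and_score(hp):
    # stand-in for training the child RNN on log data and computing
    # its log-classification accuracy (peaks at 64 hidden units here)
    return 1.0 - abs(hp["hidden_units"] - 64) / 64.0

params = {"scale": 1.0}
best = train_and_score(controller(params))
for _ in range(50):
    # adjust a controller parameter to produce a different input set
    candidate = {"scale": params["scale"] + random.uniform(-0.2, 0.2)}
    acc = train_and_score(controller(candidate))
    if acc > best:
        params, best = candidate, acc
print(round(best, 3))
```

In the patented system the adjustment would be a learned update to the controller RNN rather than random perturbation; the sketch only shows the accuracy-driven feedback.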
FALLBACK ARTIFICIAL INTELLIGENCE SYSTEM FOR REDUNDANCY DURING SYSTEM FAILOVER
There are provided systems and methods for a fallback artificial intelligence (AI) system for redundancy during system failover. A service provider may provide AI systems for automated decision-making, such as for risk analysis, marketing, and the like. An AI system may operate in a production computing environment in order to provide AI decision-making based on input data, for example, by providing an output decision. In order to provide redundancy to the production AI system, the service provider may train a fallback AI system using the input/output data pairs from the production AI system. This may utilize a deep neural network and a continual learning trainer. Thereafter, when a failover condition is detected for the production AI system, the service provider may switch from the production AI system to the fallback AI system, which may provide decision-making operations during failure within the production computing environment.
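A minimal sketch of the failover pattern (class names and the nearest-neighbour stand-in for the trained deep network are illustrative assumptions): the fallback learns from the production system's input/output pairs, and a router switches to it when the production system fails.

```python
# Sketch: shadow-train a fallback on production I/O pairs, then route
# to it when a failover condition is detected.
class Production:
    healthy = True
    def decide(self, x):
        if not self.healthy:
            raise RuntimeError("production AI unavailable")
        return "approve" if x > 0.5 else "deny"

class Fallback:
    def __init__(self):
        self.pairs = []
    def learn(self, x, y):
        # continual learning from production input/output pairs
        self.pairs.append((x, y))
    def decide(self, x):
        # nearest-neighbour stand-in for the trained deep network
        return min(self.pairs, key=lambda p: abs(p[0] - x))[1]

prod, fb = Production(), Fallback()
for x in [0.1, 0.4, 0.6, 0.9]:
    fb.learn(x, prod.decide(x))  # shadow-train the fallback

def route(x):
    try:
        return prod.decide(x)    # normal operation
    except RuntimeError:
        return fb.decide(x)      # failover condition detected

prod.healthy = False             # simulate production failure
print(route(0.8))                # "approve", served by the fallback
```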
ROBUSTNESS SETTING DEVICE, ROBUSTNESS SETTING METHOD, STORAGE MEDIUM STORING ROBUSTNESS SETTING PROGRAM, ROBUSTNESS EVALUATION DEVICE, ROBUSTNESS EVALUATION METHOD, STORAGE MEDIUM STORING ROBUSTNESS EVALUATION PROGRAM, COMPUTATION DEVICE, AND STORAGE MEDIUM STORING PROGRAM
A robustness setting device provided with robustness specifying means for specifying a robustness level required in a computation device using a trained model against an adversarial sample that is an input signal to which a perturbation has been added in order to induce an erroneous determination by the trained model; and level determination means for determining a noise removal level for the input signal based on the robustness level.
TEMPERATURE PREDICTION SYSTEM AND METHOD FOR PREDICTING A TEMPERATURE OF A CHIP OF A PCIE CARD OF A SERVER
To predict a temperature of a chip of a PCIe card of a server, use a gated recurrent unit of a recurrent neural network to define a temperature prediction model for the chip, collect training data of the temperature prediction model according to mutual response changes of control variables, use the training data to train the temperature prediction model to obtain a training result close to a measured temperature of the chip and evaluate the training result to obtain features that best reflect the temperature change of the chip, perform an error analysis on the training result to obtain a set of key features from the features, form a temperature predictor according to the set of key features and the temperature prediction model, and generate a predicted temperature of the chip by the temperature predictor.
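The gated recurrent unit at the core of the prediction model can be shown as a single-unit, scalar sketch (the weights are arbitrary illustrative values, not a trained model):

```python
# Minimal single-unit GRU cell in pure Python, showing the gating
# mechanism the temperature prediction model builds on.
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gru_step(x, h, w):
    """One GRU update: x = input (e.g. a key feature), h = hidden state."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)                # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)                # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

weights = {"wz": 0.5, "uz": 0.1, "wr": 0.4, "ur": 0.2, "wh": 0.9, "uh": 0.3}
h = 0.0
for x in [0.2, 0.5, 0.8]:  # a short sequence of feature values
    h = gru_step(x, h, weights)
print(round(h, 4))          # the state the temperature predictor reads
```

In the patented system the GRU would be vector-valued and trained on the collected control-variable data; this scalar version only illustrates the update/reset gating.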
Optimized Neural Network Data Organization
In some implementations, the present disclosure relates to a method. The method includes obtaining a set of weights for a neural network comprising a plurality of nodes and a plurality of connections between the plurality of nodes. The method also includes identifying a first subset of weights and a second subset of weights based on the set of weights. The first subset of weights comprises weights that are used by the neural network. The second subset of weights comprises weights that are prunable. The method further includes storing the first subset of weights in a first portion of a memory. A first error correction code is used for the first portion of the memory. The method further includes storing the second subset of weights in a second portion of the memory. A second error correction code is used for the second portion of the memory. The second error correction code is weaker than the first error correction code.
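The identification and placement steps can be sketched as follows (the magnitude threshold and the region names are assumptions for illustration; the patent does not specify how prunable weights are identified):

```python
# Sketch: partition weights into a "used" subset, destined for memory
# with a strong ECC, and a "prunable" subset for memory with a weaker
# (cheaper) ECC.
PRUNE_THRESHOLD = 0.05  # illustrative: weights this small are prunable

weights = [0.8, -0.01, 0.3, 0.002, -0.6, 0.04]

used     = [w for w in weights if abs(w) >= PRUNE_THRESHOLD]
prunable = [w for w in weights if abs(w) <  PRUNE_THRESHOLD]

memory = {
    "region_strong_ecc": used,      # e.g. SEC-DED protected
    "region_weak_ecc":   prunable,  # e.g. parity-only
}
print(memory["region_strong_ecc"])  # [0.8, 0.3, -0.6]
```

The payoff is that ECC overhead is spent only where a bit flip would actually degrade inference quality.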
Computer system integrity through a combination of certifiable and qualifiable software
A method of improving integrity of a computer system includes executing certifiable and qualifiable software applications. The certifiable software application is composed of static program instructions executed sequentially to process input data to produce an output, and the qualifiable software application uses a model iteratively built using a machine learning algorithm to process the input data to produce a corresponding output. The certifiable software application is certifiable for the computer system according to a certification standard, while the qualifiable software application is non-certifiable for the computer system according to the certification standard. The method also includes cross-checking the output by comparison with the corresponding output to verify the output, and thereby improve integrity of the computer system. The method further includes generating an alert that the output is unverified when the comparison indicates that the output differs from the corresponding output by more than a threshold.
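The cross-check itself reduces to a threshold comparison, sketched here (the threshold value and function names are illustrative):

```python
# Sketch: cross-check the certifiable application's output against the
# qualifiable (ML-based) application's corresponding output.
THRESHOLD = 0.1  # illustrative divergence threshold

def cross_check(certifiable_out, qualifiable_out, threshold=THRESHOLD):
    """Return (verified, alert) for a pair of corresponding outputs."""
    if abs(certifiable_out - qualifiable_out) > threshold:
        return False, "ALERT: output unverified (cross-check mismatch)"
    return True, None

print(cross_check(1.00, 1.05))  # outputs agree: verified, no alert
print(cross_check(1.00, 1.30))  # divergence exceeds threshold: alert
```

Note the asymmetry of trust: the qualifiable ML output is not certified on its own, but agreement between the two independent implementations raises confidence in the result.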
Weights safety mechanism in an artificial neural network processor
Novel and useful system and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping as well as self-tuning procedures that modify static (weights) and monitor dynamic (activations) behavior. The mechanisms address ANN system level safety in situ, as a system level strategy tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts that function to detect and promptly flag and report an error with some mechanisms capable of correction as well. The safety mechanisms cover data stream fault detection, software defined redundant allocation, cluster interlayer safety, cluster intralayer safety, layer control unit (LCU) instruction addressing, weights storage safety, and neural network intermediate results storage safety.