Patent classifications
G06N3/105
ENCRYPTION METHOD AND SYSTEM FOR XENOMORPHIC CRYPTOGRAPHY
The present invention relates to a method and system of cybersecurity, and particularly to an encryption method and system based on cognitive computing for xenomorphic cryptography, i.e., an unusual form of cryptography. Said method comprises: generating a Functional Neural Network or KeyNode (KN) of the system by programming a chain of multiple nodes, also called Artificial Mirror Neurons (AMN), based on captured information about reaction time and emotional response to a simple task; racing the nodes in the Functional Neural Network or KeyNode (KN) as an encryption device or cipher for the time of use; generating a password at the time of use based on the sum of the intrinsic values of the nodes in the racing network at that time; and adopting the generated password for authentication. The present invention can be applied to secure online and mobile communication, especially at the dawn of 5G and the generalization of open-API lifestyle platforms, so as to allow real-time identification for digital cryptocurrency payments and other public distributed ledger technology (DLT) mechanisms.
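The time-of-use password step described above can be illustrated with a toy sketch. All names, the node-update rule, and the hashing choice here are assumptions for illustration only, not details from the patent: a chain of nodes seeded from captured response data is "raced" for some number of ticks, and the password is derived from the sum of the nodes' intrinsic values at that moment.

```python
# Toy illustration of the time-of-use password idea; the node update rule
# (a simple linear congruential step) and SHA-256 derivation are invented
# stand-ins, not the patented mechanism.
import hashlib

def race_nodes(seed_values, steps):
    """Advance each node's intrinsic value deterministically for `steps` ticks."""
    values = list(seed_values)
    for _ in range(steps):
        values = [(v * 1103515245 + 12345) % 2**31 for v in values]
    return values

def password_at_time_of_use(seed_values, steps):
    """Sum the intrinsic values of the raced nodes and hash the sum."""
    total = sum(race_nodes(seed_values, steps))
    return hashlib.sha256(str(total).encode()).hexdigest()[:16]
```

Because the race is deterministic given the seeds and the tick count, both sides of an authentication exchange can regenerate the same password at the same time of use.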
System and method for compact and efficient sparse neural networks
A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. A minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify a pair of artificial neurons that have a connection represented by the weight. Only non-zero weights may be stored that represent connections between pairs of neurons (and zero weights may not be stored that represent no connections between pairs of neurons).
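The storage scheme described above can be sketched concretely. The index packing and the dictionary representation below are illustrative assumptions, not the patent's claimed encoding: each non-zero weight is stored under a unique index identifying the connected pair of neurons, and absent (zero) connections consume no storage.

```python
# Illustrative sparse storage: only non-zero weights are kept, each keyed by
# a unique index derived from (layer, source neuron, destination neuron).

def encode_index(layer, src, dst, layer_width):
    """Pack (layer, source, destination) into one unique integer index."""
    return (layer * layer_width + src) * layer_width + dst

def store_sparse(dense_layers, layer_width=1024):
    """dense_layers: list of 2-D weight matrices; returns {index: weight}."""
    sparse = {}
    for layer, matrix in enumerate(dense_layers):
        for src, row in enumerate(matrix):
            for dst, w in enumerate(row):
                if w != 0.0:  # only actual connections are stored
                    sparse[encode_index(layer, src, dst, layer_width)] = w
    return sparse

def lookup(sparse, layer, src, dst, layer_width=1024):
    """A missing index means no connection, i.e. weight zero."""
    return sparse.get(encode_index(layer, src, dst, layer_width), 0.0)
```

For a network where only a minority of neuron pairs are connected, this dictionary holds far fewer entries than the dense matrices it replaces.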
METHOD FOR ASSISTING LAUNCH OF MACHINE LEARNING MODEL
A method for assisting launch of a machine learning model includes: acquiring a model file from offline training of the machine learning model; determining a training data table used in a model training process by analyzing the model file; creating in an online database an online data table having consistent table information with the training data table; and importing at least a part of offline data into the online data table.
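The table-creation and import steps above can be sketched with an in-memory database. The table and column names, and the use of SQLite, are assumptions for illustration: an online table is created with table information consistent with the training data table, then a part of the offline data is imported into it.

```python
# Hedged sketch of the launch-assist flow: create an online table matching
# the training table's schema, then import a slice of the offline data.
import sqlite3

def create_online_table(conn, table_name, columns):
    """columns: list of (name, sql_type) pairs taken from the training table."""
    cols = ", ".join(f"{name} {ctype}" for name, ctype in columns)
    conn.execute(f"CREATE TABLE {table_name} ({cols})")

def import_offline_rows(conn, table_name, rows):
    """Import at least a part of the offline data into the online table."""
    placeholders = ", ".join("?" for _ in rows[0])
    conn.executemany(f"INSERT INTO {table_name} VALUES ({placeholders})", rows)
    conn.commit()
```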
Intelligent framework updater to incorporate framework changes into data analysis models
A computer system adapts a model analyzing data. Information sources are analyzed to determine one or more changes for a computerized model employed for analyzing data. One or more current projects each using an implementation of the computerized model with at least one of the determined changes are identified. The implementations are compared to the employed computerized model to determine differences. One or more adaptations for the employed computerized model are determined in response to the determined differences satisfying a threshold, wherein the one or more adaptations for the employed computerized model are based on the determined changes in the corresponding implementation of the computerized model. At least one adaptation is installed into a platform hosting the employed model for modification of the employed model. Embodiments of the present invention further include a method and program product for adapting a model analyzing data in substantially the same manner described above.
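The threshold-gated adaptation decision can be sketched as follows. The dictionary-of-settings representation and the count-based threshold are assumptions, not the patent's definition of "differences": adaptations are emitted only when the differences between a project's implementation and the employed model satisfy the threshold.

```python
# Minimal sketch: compare a project's model implementation against the
# employed model; if enough settings differ, adopt the changed settings.

def determine_adaptations(employed, implementation, threshold=1):
    """Both args are dicts of model settings; returns settings to adopt."""
    differences = {k: v for k, v in implementation.items()
                   if employed.get(k) != v}
    if len(differences) >= threshold:  # differences satisfy the threshold
        return differences             # adaptations based on the changes
    return {}

def install_adaptations(employed, adaptations):
    """Install the adaptations into the employed model's configuration."""
    updated = dict(employed)
    updated.update(adaptations)
    return updated
```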
Method, apparatus, and computer program product for machine learning model lifecycle management
Computing systems, computing apparatuses, computing methods, and computer program products are disclosed for machine learning model lifecycle management. An example computing method includes receiving a machine learning model selection, a machine learning model experiment creation input, a machine learning model experiment run type, and a machine learning model input data path. The example method further includes determining a machine learning model execution engine based on the machine learning model experiment creation input and the machine learning model experiment run type. The example method further includes retrieving input data based on the machine learning model input data path. The example method further includes executing a machine learning model experiment based on the machine learning model execution engine, machine learning model experiment creation input, and the input data. The example method further includes generating one or more machine learning model scores based on the machine learning model experiment.
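The engine-determination step above can be illustrated with a small dispatch function. The engine names and the selection rules are invented for the sketch: the execution engine is determined from the experiment creation input and the run type, the experiment is executed, and one or more scores are generated.

```python
# Hedged sketch of engine selection and experiment execution; engine names
# and selection rules are hypothetical, not from the patent.

def determine_engine(creation_input, run_type):
    """Pick an execution engine from the creation input and run type."""
    if run_type == "distributed":
        return "spark_engine"
    if creation_input.get("gpu"):
        return "gpu_engine"
    return "local_engine"

def run_experiment(creation_input, run_type, input_data):
    """Execute the experiment and generate machine learning model scores."""
    engine = determine_engine(creation_input, run_type)
    # Stand-in scoring: a real system would train/evaluate on `engine`.
    return {"engine": engine, "row_count": len(input_data)}
```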
Artificial intelligence workflow builder
In some examples, a method includes receiving an artificial intelligence (AI) system scenario definition file from a user, parsing the definition file and building an application workflow graph for the AI system, and mapping the application workflow graph to an execution pipeline. In some examples, the method further includes automatically generating, from the workflow graph, application executable binary code implementing the AI system, and outputting the application executable binary code to the user. In some examples, the execution pipeline includes one or more building blocks, and the method then further includes collecting running performance of each of the building blocks of the execution pipeline in a runtime environment.
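The parse-then-map flow above can be sketched end to end. The JSON scenario schema is an assumption (the patent does not specify a file format), and the pipeline mapping is a plain topological ordering of an acyclic workflow graph.

```python
# Illustrative sketch: parse a scenario definition, build a workflow graph,
# and map it to a linear execution pipeline of building blocks.
import json

def build_workflow_graph(definition_text):
    """Assumed schema: {"steps": [{"name": ..., "after": [...]}, ...]}"""
    steps = json.loads(definition_text)["steps"]
    return {s["name"]: s.get("after", []) for s in steps}

def map_to_pipeline(graph):
    """Topologically order an acyclic workflow graph into a pipeline."""
    pipeline, placed = [], set()
    while len(pipeline) < len(graph):
        progressed = False
        for name, deps in graph.items():
            if name not in placed and all(d in placed for d in deps):
                pipeline.append(name)
                placed.add(name)
                progressed = True
        if not progressed:
            raise ValueError("workflow graph contains a cycle")
    return pipeline
```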
DYNAMIC PLACEMENT OF COMPUTATION SUB-GRAPHS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for assigning operations of a computational graph to a plurality of computing devices are disclosed. Data characterizing a computational graph is obtained. Context information for a computational environment in which to perform the operations of the computational graph is received. A model input is generated, which includes at least the context information and the data characterizing the computational graph. The model input is processed using a machine learning model to generate an output defining placement assignments of the operations of the computational graph to the plurality of computing devices. The operations of the computational graph are assigned to the plurality of computing devices according to the defined placement assignments.
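The shape of the model input and the placement output can be sketched as follows. For illustration only, the learned placement model is replaced here by a greedy least-loaded heuristic; the field names are assumptions.

```python
# Sketch of placement assignment: the "model input" combines device context
# with data characterizing the graph's operations; the output maps each
# operation to a device. A greedy heuristic stands in for the trained model.

def assign_placements(model_input):
    """model_input: {"devices": [...], "ops": [{"name": ..., "cost": ...}]}"""
    load = {d: 0.0 for d in model_input["devices"]}
    placement = {}
    for op in sorted(model_input["ops"], key=lambda o: -o["cost"]):
        device = min(load, key=load.get)  # least-loaded device so far
        placement[op["name"]] = device
        load[device] += op["cost"]
    return placement
```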
CHATBOT FOR DEFINING A MACHINE LEARNING (ML) SOLUTION
The present disclosure relates to systems and methods for an intelligent assistant (e.g., a chatbot) that can be used to enable a user to generate a machine learning system. Techniques can be used to automatically generate a machine learning system to assist a user. In some cases, the user may not be a software developer and may have little or no experience in either machine learning techniques or software programming. In some embodiments, a user can interact with an intelligent assistant. The interaction can be aural, textual, or through a graphical user interface. The chatbot can translate natural language inputs into a structural representation of a machine learning solution using an ontology. In this way, a user can work with artificial intelligence without being a data scientist to develop, train, refine, and compile machine learning models as stand-alone executable code.
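The translation step above, from natural language to a structural representation via an ontology, can be sketched with a toy lookup. The ontology contents and the phrase-matching rule are invented for illustration; a real system would use far richer language understanding.

```python
# Toy sketch: match a natural-language request against ontology terms to
# produce a structured representation of the desired ML solution.

ONTOLOGY = {  # hypothetical phrase -> ML task mapping
    "predict a number": "regression",
    "classify": "classification",
    "group similar": "clustering",
}

def translate(utterance):
    """Map a natural-language request to a structured ML solution spec."""
    text = utterance.lower()
    for phrase, task in ONTOLOGY.items():
        if phrase in text:
            return {"task": task, "source_utterance": utterance}
    return {"task": "unknown", "source_utterance": utterance}
```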
METHOD FOR IMPLEMENTING A HARDWARE ACCELERATOR OF A NEURAL NETWORK
The invention relates to a method for implementing a hardware accelerator for a neural network, comprising: a step of interpreting an algorithm of the neural network in binary format, converting the neural network algorithm in binary format into a graph representation, selecting building blocks from a library of predetermined building blocks, creating an organization of the selected building blocks, and configuring internal parameters of the building blocks of the organization so that the organization of the selected and configured building blocks corresponds to said graph representation; a step of determining an initial set of weights for the neural network; a step of completely synthesizing the organization of the selected and configured building blocks, on the one hand, into a preselected FPGA programmable logic circuit (41) as a hardware accelerator (42) for the neural network and, on the other hand, into a software driver for this hardware accelerator (42), this hardware accelerator (42) being specifically dedicated to the neural network so as to represent the entire architecture of the neural network without needing access to a memory (44) external to the FPGA programmable logic circuit (41) when passing from one layer to another layer of the neural network; and a step of loading (48) the initial set of weights for the neural network into the hardware accelerator (42).
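The building-block selection and configuration step can be illustrated in software. The library contents and layer schema below are invented: each layer of the parsed network graph is matched to a predetermined building block, whose internal parameters are configured so the assembled organization mirrors the graph.

```python
# Hedged sketch of block selection: map each graph layer to a block from a
# library of predetermined building blocks and carry over its parameters.

BLOCK_LIBRARY = {  # hypothetical library of predetermined building blocks
    "conv": "ConvBlock",
    "dense": "MacArrayBlock",
    "relu": "ActBlock",
}

def select_and_configure(graph_layers):
    """graph_layers: [{"type": ..., "params": {...}}, ...] -> configured blocks."""
    organization = []
    for layer in graph_layers:
        block = BLOCK_LIBRARY[layer["type"]]  # select from the library
        organization.append({"block": block, "config": layer["params"]})
    return organization
```

The synthesized result of such an organization, per the abstract, is a dedicated FPGA accelerator that holds the whole architecture on-chip, avoiding external memory traffic between layers.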
METHODS AND SYSTEMS FOR INTEGRATING MODEL DEVELOPMENT CONTROL SYSTEMS AND MODEL VALIDATION PLATFORMS
Methods and systems are described herein for integrating model development control systems and model validation platforms. For example, the methods and systems discussed herein recite the creation and use of a model validation platform. This platform operates outside of the environment of the independently validated models as well as the native platform into which the independently validated models may be incorporated. The model validation platform may itself include a model that systematically validates other independently validated models. The model validation platform may then provide users with substantive analysis of a model and its performance through one or more user interface tools such as side-by-side comparisons, recommended adjustments, and/or a plurality of adjustable model attributes for use in validating an inputted model.
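The side-by-side comparison and recommended-adjustments tools can be sketched as a simple metric diff. The metric names and the tolerance rule are assumptions: the platform compares an inputted model's metrics against a reference and flags where an adjustment is recommended.

```python
# Minimal sketch of a side-by-side model comparison with a recommendation
# flag; the fixed tolerance is an invented stand-in for the platform's
# adjustable model attributes.

def side_by_side(reference_metrics, candidate_metrics, tolerance=0.05):
    """Compare candidate vs reference metrics; flag material gaps."""
    report = {}
    for metric, ref in reference_metrics.items():
        cand = candidate_metrics.get(metric)
        gap = None if cand is None else cand - ref
        report[metric] = {
            "reference": ref,
            "candidate": cand,
            "recommend_adjustment": gap is None or abs(gap) > tolerance,
        }
    return report
```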