Patent classifications
G06N3/088
System and method for iterative classification using neurophysiological signals
A method of training an image classification neural network comprises: presenting a first plurality of images to an observer as a visual stimulus, while collecting neurophysiological signals from a brain of the observer; processing the neurophysiological signals to identify a neurophysiological event indicative of a detection of a target by the observer in at least one image of the first plurality of images; training the image classification neural network to identify the target in the image, based on the identification of the neurophysiological event; and storing the trained image classification neural network in a computer-readable storage medium.
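The claim uses a detected neurophysiological event (e.g. an evoked-response amplitude crossing a threshold) as a weak label for training the image classifier. A minimal sketch of that idea, with the event threshold, the 1-D toy classifier, and all parameter values invented for illustration:

```python
import math

def detect_event(signal, threshold=3.0):
    """Hypothetical event detector: flag a target-detection event when the
    observer's signal exceeds an amplitude threshold."""
    return max(signal) >= threshold

def weak_labels(signals, threshold=3.0):
    """Turn per-image neurophysiological recordings into 0/1 training labels."""
    return [1 if detect_event(s, threshold) else 0 for s in signals]

def train(features, labels, lr=0.5, epochs=200):
    """Toy 1-D logistic-regression classifier trained on the brain-derived labels."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b
```

In the claimed method the classifier would be a neural network over image pixels; the logistic unit here only stands in for that trainable model.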
System-on-a-chip incorporating artificial neural network and general-purpose processor circuitry
A circuit system and a method of analyzing audio or video input data that is capable of detecting, classifying, and post-processing patterns in an input data stream. The circuit system may consist of one or more digital processors, one or more configurable spiking neural network circuits, and digital logic for the selection of two-dimensional input data. The system may use the neural network circuits to detect and classify patterns, and one or more of the digital processors to perform further detailed analyses on the input data and to signal the result of an analysis to outputs of the system.
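The described division of labor — a spiking circuit flags interesting positions in the stream, a general-purpose processor then analyzes only those positions — can be sketched with a leaky integrate-and-fire neuron as the detector; the leak, gain, threshold, and window width below are all illustrative assumptions, not values from the disclosure:

```python
def lif_detect(stream, leak=0.9, gain=1.0, threshold=2.0):
    """Leaky integrate-and-fire detector: return indices where the membrane
    potential crosses the firing threshold (the potential resets after a spike)."""
    v, spikes = 0.0, []
    for i, x in enumerate(stream):
        v = leak * v + gain * x
        if v >= threshold:
            spikes.append(i)
            v = 0.0  # reset after spike
    return spikes

def analyze_windows(stream, spikes, width=2):
    """Processor stage: run a detailed analysis (here just a windowed sum)
    only on the stream segments the spiking detector flagged."""
    return [sum(stream[max(0, i - width): i + 1]) for i in spikes]
```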
Systems and methods for identifying unknown protocols associated with industrial control systems
A device may receive a hash table that includes lists of protocol detectors, wherein the hash table is generated based on historical process data identifying potential process variables associated with an industrial control system. The device may receive a packet identifying potential process variables associated with the industrial control system, and may extract, from the packet, packet data identifying a source address, a destination address, a port, and a transport protocol. The device may compare the packet data with data in the hash table to identify a set of lists of protocol detectors, and may process the packet data, with the set of lists of protocol detectors, to determine a matching protocol, no matching protocol, or a potential matching protocol for the packet. The device may perform one or more actions based on determining the matching protocol, no matching protocol, or the potential matching protocol for the packet.
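The lookup-then-detect flow — key the hash table by extracted packet fields, run the matching list of detectors, and return a match, no match, or a potential match — can be sketched as follows. The table layout, the Modbus-style example detector, and the field names are assumptions for illustration only:

```python
def classify_packet(packet, detector_table):
    """Look up detectors by (port, transport), run them on the payload, and
    report a matching protocol, a potential match, or no match."""
    key = (packet["port"], packet["transport"])
    detectors = detector_table.get(key, [])
    hits = [name for name, fn in detectors if fn(packet["payload"])]
    if len(hits) == 1:
        return ("match", hits[0])
    if len(hits) > 1:
        return ("potential", hits)  # ambiguous: more than one detector fired
    return ("no_match", None)

def looks_like_modbus(payload):
    """Hypothetical detector: Modbus/TCP transaction IDs often start low."""
    return payload[:1] == b"\x00"

DETECTOR_TABLE = {(502, "tcp"): [("modbus", looks_like_modbus)]}
```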
Automated malware analysis that automatically clusters sandbox reports of similar malware samples
A system and a method for automatically clustering sandbox analysis reports of similar malware samples. An automated malware analysis process includes receiving, at an application programming interface (API) of a clustering server, the sandbox analysis reports of the similar malware samples from a sandbox server; clustering similar Uniform Resource Locators (URLs) together; and clustering the sandbox analysis reports into sandbox report clusters (1-n) based on the URL clustering, static properties of the malware samples, and dynamic properties of the malware samples.
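The URL-based grouping step can be sketched by normalizing each contacted URL into a cluster key and grouping reports that share a key. The normalization here (collapsing digit runs in the path) is one plausible heuristic chosen for illustration, not the patent's actual similarity measure:

```python
import re
from collections import defaultdict
from urllib.parse import urlparse

def url_key(url):
    """Cluster key: host plus path with digit runs collapsed, so URLs that
    differ only in numeric identifiers fall into the same cluster."""
    p = urlparse(url)
    return p.netloc + re.sub(r"\d+", "N", p.path)

def cluster_reports(reports):
    """Group sandbox report IDs whose contacted URLs normalize to the same key."""
    clusters = defaultdict(list)
    for report in reports:
        for url in report["urls"]:
            clusters[url_key(url)].append(report["id"])
    return dict(clusters)
```

A full implementation would further refine these clusters using the static and dynamic properties mentioned in the abstract.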
Method for generating web code for UI based on a generative adversarial network and a convolutional neural network
Provided is a method for generating web codes for a user interface (UI) based on a generative adversarial network (GAN) and a convolutional neural network (CNN). The method includes steps described below. A mapping relationship between display effects of a HyperText Markup Language (HTML) element and source codes of the HTML element is constructed. A location of an HTML element in an image I is recognized. Complete HTML codes of the image I are generated. The similarity between manually-written HTML codes and the generated complete HTML codes and the similarity between the image I and an image I.sub.1 rendered from the generated complete HTML codes are obtained. After training, an image-to-HTML-code generation model M is obtained. A to-be-processed UI image is input into the model M so as to obtain corresponding HTML codes. According to the method of the present disclosure, an image-to-HTML-code generation model M can be obtained.
Universal feature representation learning for face recognition
A computer-implemented method for implementing face recognition includes receiving training data including a plurality of augmented images each corresponding to a respective one of a plurality of input images augmented by one of a plurality of variations, splitting a feature embedding generated from the training data into a plurality of sub-embeddings each associated with one of the plurality of variations, associating each of the plurality of sub-embeddings with respective ones of a plurality of confidence values, and applying a plurality of losses including a confidence-aware identification loss and a variation-decorrelation loss to the plurality of sub-embeddings and the plurality of confidence values to improve face recognition performance by learning the plurality of sub-embeddings.
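The two mechanical pieces of the claim — splitting the feature embedding into per-variation sub-embeddings, and weighting each sub-embedding's loss by a learned confidence — can be sketched as below. The equal-length split and the `-log(c)` confidence penalty are common choices assumed here for illustration; the patent does not specify these exact forms in the abstract:

```python
import math

def split_embedding(embedding, num_subs):
    """Split a flat feature embedding into equal-length sub-embeddings,
    one per variation (pose, blur, occlusion, ...)."""
    step = len(embedding) // num_subs
    return [embedding[i * step:(i + 1) * step] for i in range(num_subs)]

def confidence_aware_loss(distances, confidences):
    """Confidence-weighted identification loss: each sub-embedding's distance
    to its identity prototype is scaled by its confidence, with a -log(c)
    penalty so the model cannot trivially drive all confidences to zero."""
    return sum(c * d - math.log(c) for d, c in zip(distances, confidences))
```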
Anomaly pattern detection system and method
Provided is an anomaly pattern detection system including an anomaly detection device connected to one or more servers. The anomaly detection device may include an anomaly detector configured to model input data by considering all of the input data as normal patterns, and detect an anomaly pattern from the input data based on the modeling result.
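The key idea — model all of the input data as normal, then flag points that deviate from that model — can be sketched with a simple Gaussian summary; the mean/standard-deviation model and the deviation threshold k are illustrative assumptions, not the disclosed modeling method:

```python
import statistics

def fit_normal_model(data):
    """Model all input data as normal patterns: summarize it by its mean and
    sample standard deviation."""
    return statistics.mean(data), statistics.stdev(data)

def detect_anomalies(data, model, k=1.5):
    """Flag values lying more than k standard deviations from the modeled mean."""
    mu, sigma = model
    return [x for x in data if abs(x - mu) > k * sigma]
```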
Selecting learning model
According to a first aspect, it is presented a method for dynamically selecting a learning model for a sensor device. The learning model is configured for determining output data based on sensor data. The method comprises the steps of: detecting a need for a new learning model for the sensor device based on performance of a currently loaded learning model in the sensor device; determining at least one feature candidate based on sensor data from at least one sensor, wherein each one of the at least one feature candidate is associated with a different source of sensor data; selecting a new learning model, from a set of candidate learning models, based on the at least one feature candidate and input features of each one of the candidate learning models; and triggering the new learning model to be loaded on the sensor device, replacing the currently loaded learning model.
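The selection step matches the available feature candidates against each candidate model's required input features. A minimal sketch, with the tie-breaking rule (prefer the viable model using the most features) assumed for illustration:

```python
def select_model(feature_candidates, candidate_models):
    """Pick a candidate model whose required input features are all available
    on the sensor device, preferring the one that uses the most features."""
    available = set(feature_candidates)
    viable = [m for m in candidate_models if set(m["inputs"]) <= available]
    if not viable:
        return None  # no candidate model fits the available sensor data
    return max(viable, key=lambda m: len(m["inputs"]))["name"]
```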
3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network
A 3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network is described. A machine learning method for low-dose computed tomography (LDCT) image correction is provided. The method includes training, by a training circuitry, a neural network (NN) based, at least in part, on two-dimensional (2-D) training data. The 2-D training data includes a plurality of 2-D training image pairs. Each 2-D image pair includes one training input image and one corresponding target output image. The training includes adjusting at least one of a plurality of 2-D weights based, at least in part, on an objective function. The method further includes refining, by the training circuitry, the NN based, at least in part, on three-dimensional (3-D) training data. The 3-D training data includes a plurality of 3-D training image pairs. Each 3-D training image pair includes a plurality of adjacent 2-D training input images and at least one corresponding target output image. The refining includes adjusting at least one of a plurality of 3-D weights based, at least in part, on the plurality of 2-D weights and based, at least in part, on the objective function. The plurality of 2-D weights includes the at least one adjusted 2-D weight.
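One common way to initialize 3-D weights from trained 2-D weights — assumed here for illustration; the abstract does not specify this exact scheme — is to replicate the 2-D kernel along the depth axis and scale by 1/depth, so the 3-D filter initially responds to a depth-constant input exactly as the 2-D filter did:

```python
def inflate_2d_kernel(kernel_2d, depth):
    """Initialize a 3-D convolution kernel from a trained 2-D kernel by
    replicating the 2-D weights along the depth axis and dividing by depth,
    preserving the filter's response to depth-constant input."""
    return [[[w / depth for w in row] for row in kernel_2d]
            for _ in range(depth)]
```

Summing the inflated slices over depth recovers the original 2-D kernel, which is the sense in which the 2-D training is carried over into the 3-D refinement stage.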