Patent classifications
G06F18/2163
Machine learning-based root cause analysis of process cycle images
The technology disclosed relates to classification of process cycle images to predict the success or failure of process cycles. The technology disclosed includes capturing and processing images of sections arranged on an image generating chip in a genotyping process. Image description features of production cycle images are created and given as input to classifiers. A trained classifier separates successful production images from unsuccessful (failed) production images. The failed production images are further classified by a trained root cause classifier into various categories of failure.
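The two-stage pipeline above can be sketched as follows. This is a minimal illustration, not the patented method: the feature vectors, labels, and choice of RandomForestClassifier are all placeholder assumptions standing in for the real image description features and trained classifiers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "image description" feature vectors (placeholders for real features).
X = rng.normal(size=(200, 16))
passed = (X[:, 0] > 0).astype(int)          # 1 = successful production cycle
root_cause = rng.integers(0, 3, size=200)   # failure-category labels (0..2)

# Stage 1: separate successful from failed production images.
pass_fail = RandomForestClassifier(random_state=0).fit(X, passed)

# Stage 2: classify only the failed images into root-cause categories.
failed = passed == 0
root_clf = RandomForestClassifier(random_state=0).fit(X[failed], root_cause[failed])

def predict(x):
    """Run stage 1; on predicted failure, run the root-cause classifier."""
    x = x.reshape(1, -1)
    if pass_fail.predict(x)[0] == 1:
        return "success"
    return f"failure:{root_clf.predict(x)[0]}"
```

In a real deployment the two models would be trained on labeled production images rather than random vectors.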
Behaviour modeling, verification, and autonomous actions and triggers of ML and AI systems
An exemplary embodiment may present a behavior modeling architecture intended to assist in handling, modeling, predicting, and verifying the behavior of machine learning models, to assure that the safety of such systems meets the required specifications, and to adapt the architecture according to the execution sequences of the behavioral model. An embodiment may enable conditions in a behavioral model to be integrated into the execution sequence of behavioral modeling in order to monitor the likelihoods of certain paths in a system. An embodiment allows for real-time monitoring during training and prediction of machine learning models. Conditions may also be utilized to trigger system-knowledge injection in a white-box model in order to keep the behavior of a system within defined boundaries. An embodiment further enables additional formal verification constraints to be set on the output or internal parts of white-box models.
Deep feature extraction and training tools and associated methods
Deep feature extraction and training tools and processes may facilitate extraction and understanding of deep features utilized by deep learning models. For example, imaging data may be tessellated and masked to generate a plurality of masked images. The masked images may be processed by a deep learning model to generate a plurality of masked outputs. The masked outputs may be aggregated for each cell of the tessellated image and compared to an original output for the imaging data from the deep learning model. Individual cells and associated image regions having masked outputs that correspond to the original output may comprise deep features utilized by the deep learning model.
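The tessellate-mask-compare loop above resembles occlusion-sensitivity analysis, which can be sketched minimally as below. The `model` function is a toy stand-in for a deep learning model, and flagging cells whose masking *changes* the output is one common variant; the patent's aggregation and correspondence criteria may differ.

```python
import numpy as np

def model(img):
    # Stand-in for a deep learning model: scores the top-left quadrant only.
    return img[:4, :4].mean()

img = np.zeros((8, 8))
img[:4, :4] = 1.0                 # the region the "model" actually uses
original = model(img)

cell = 4                          # tessellation cell size
important = []
for r in range(0, 8, cell):
    for c in range(0, 8, cell):
        masked = img.copy()
        masked[r:r+cell, c:c+cell] = 0          # mask one tessellation cell
        # A cell whose masking shifts the output marks a feature the model uses.
        if abs(model(masked) - original) > 1e-6:
            important.append((r, c))
```

Here only the top-left cell is flagged, matching the region the stand-in model reads.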
DESIGN OPTIMIZATION AND USE OF CODEBOOKS FOR DOCUMENT ANALYSIS
A method of generating and optimizing a codebook for document analysis comprises: receiving a first set of document images; extracting a plurality of keypoint regions from each document image of the first set of document images; calculating local descriptors for each keypoint region of the extracted keypoint regions; clustering the local descriptors such that each center of a cluster of local descriptors corresponds to a respective visual word; generating a codebook containing a set of visual words; and optimizing the codebook by maximizing mutual information (MI) between a target field of a second set of document images and at least one visual word of the set of visual words.
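The clustering step can be sketched with k-means, where each cluster center becomes one visual word. The random descriptors below stand in for real local descriptors (e.g. SIFT-like vectors) from keypoint regions, and the MI-based optimization over a second document set is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Local descriptors pooled from keypoint regions of the first set of
# document images; random stand-ins here.
descriptors = rng.normal(size=(300, 32))

# Cluster the descriptors: each cluster center becomes one visual word.
n_words = 8
kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(descriptors)
codebook = kmeans.cluster_centers_        # codebook of n_words visual words
```

A new descriptor is then assigned to its nearest visual word via `kmeans.predict`.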
METHOD OF STABLE LASSO MODEL STRUCTURE LEARNING TO BUILD INFERENTIAL SENSORS
A stabilization method and mechanism for model structure learning is described. A model is built based on a full data set. The full data set is partitioned into cross validation (CV) folds. A set of model structures of the model are cross validated for each CV fold while penalizing structural deviations from the model to determine CV errors. A model structure is selected from the set of model structures based on a comparison of CV errors with an industrial data set.
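One way to read the per-fold structure comparison is sketched below: fit a lasso on each CV fold, record which inputs each fold selects, and treat a structure selected by every fold as stable. The synthetic data, `alpha` value, and intersection rule are illustrative assumptions, not the patented penalty on structural deviations.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 10))
# True structure uses inputs 0 and 3 only.
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=120)

# Fit a lasso per CV fold and record which inputs each fold selects.
supports = []
for train_idx, _ in KFold(n_splits=4, shuffle=True, random_state=0).split(X):
    coef = Lasso(alpha=0.1).fit(X[train_idx], y[train_idx]).coef_
    supports.append(frozenset(np.flatnonzero(np.abs(coef) > 1e-6)))

# A structure selected by every fold is a stable candidate for the
# inferential sensor model.
stable = frozenset.intersection(*supports)
```

Structures that appear in only some folds would be penalized or discarded as unstable.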
Abnormality detection apparatus and vehicle system
An abnormality detection apparatus including a feature extraction circuit configured to extract a feature point and a feature value of a first image, and a feature point and a feature value of a second image; a flow calculation circuit configured to calculate first to fourth optical flows based on the feature values of the first and second images; a first abnormality detection circuit configured to detect an abnormality in the first image based on the first optical flow, and to detect an abnormality in the second image based on the third optical flow; and a second abnormality detection circuit configured to detect an abnormality in the first or second image based on a result of a comparison between the second optical flow and the fourth optical flow.
Systems and methods for image preprocessing
A method and apparatus of a device that classifies an image is described. In an exemplary embodiment, the device segments the image into a region of interest that includes information useful for classification and a background region by applying a first convolutional neural network. In addition, the device tiles the region of interest into a set of tiles. For each tile, the device extracts a feature vector of that tile by applying a second convolutional neural network, where the features of the feature vectors represent local descriptors of the tile. Furthermore, the device processes the extracted feature vectors of the set of tiles to classify the image.
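The tiling and per-tile feature extraction steps can be sketched as below. The `features` function is a hand-made stand-in for the second convolutional neural network, and the ROI is assumed to have already been produced by the segmentation step.

```python
import numpy as np

def tile(region, size):
    """Split a region of interest into non-overlapping size-by-size tiles."""
    h, w = region.shape
    return [region[r:r+size, c:c+size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def features(t):
    # Stand-in for the second CNN: a tiny local descriptor per tile.
    return np.array([t.mean(), t.std()])

roi = np.arange(64, dtype=float).reshape(8, 8)  # assumed segmentation output
tiles = tile(roi, 4)
vecs = np.stack([features(t) for t in tiles])   # one feature vector per tile
```

The stacked per-tile vectors would then feed the final classification stage.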
INTELLIGENT IMAGE SEGMENTATION PRIOR TO OPTICAL CHARACTER RECOGNITION (OCR)
A medical device monitoring system and method extract information from screen images from medical device controllers, with a single OCR process invocation per screen image, despite critical information appearing in different screen locations, depending on which medical device controller's screen image is processed. For example, different software versions of the medical device controllers might display the same type of information in different screen locations. Copies of the critical screen information, one copy from each different screen location, are made in a mosaic image, and then the mosaic image is OCR processed to produce text results. Text is selectively extracted from the OCR text results, depending on contents of a selector field on the screen image, such as a software version number or a heart pump model identifier.
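The mosaic assembly step can be sketched as below: copy every candidate location of the critical field into one stacked image so that a single OCR invocation (not shown; e.g. an engine such as Tesseract) covers all screen layouts. The screen contents and region coordinates are invented for illustration.

```python
import numpy as np

# A fake 2-D "screen image"; real screens would be frames from a controller.
screen = np.zeros((100, 200), dtype=np.uint8)
screen[10:20, 30:90] = 1       # critical text location in layout/version A
screen[60:70, 100:160] = 2     # same field's location in layout/version B

# Candidate (top, bottom, left, right) locations across software versions.
regions = [(10, 20, 30, 90), (60, 70, 100, 160)]

# Stack a copy of each candidate region into one mosaic image.
crops = [screen[t:b, l:r] for t, b, l, r in regions]
height = sum(c.shape[0] for c in crops)
width = max(c.shape[1] for c in crops)
mosaic = np.zeros((height, width), dtype=np.uint8)
y = 0
for c in crops:
    mosaic[y:y + c.shape[0], :c.shape[1]] = c
    y += c.shape[0]
```

OCR then runs once on `mosaic`, and the selector field (e.g. the software version) decides which text rows to keep.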
Method and apparatus for fusing position information, and non-transitory computer-readable recording medium
A method and an apparatus for fusing position information, and a non-transitory computer-readable recording medium are provided. In the method, words of an input sentence are segmented to obtain a first sequence of words in the input sentence, and absolute position information of the words in the first sequence is generated. Then, subwords of the words in the first sequence are segmented to obtain a second sequence including subwords, and position information of the subwords in the second sequence are generated, based on the absolute position information of the words in the first sequence, to which the respective subwords belong. Then, the position information of the subwords in the second sequence are fused into a self-attention model to perform model training or model prediction.
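The position-generation steps can be sketched as below, with each subword inheriting the absolute position of the word it belongs to. The toy segmenter stands in for a real subword tokenizer (e.g. BPE or WordPiece), and the final fusion into the self-attention model is omitted.

```python
# Words get absolute positions; each subword inherits the position of the
# word it came from (one simple scheme consistent with the description).
sentence = ["unbelievable", "results"]
word_positions = list(range(len(sentence)))      # absolute word positions

def split_subwords(word):
    # Stand-in subword segmenter: split long words after two characters.
    return [word[:2], word[2:]] if len(word) > 4 else [word]

subwords, subword_positions = [], []
for word, pos in zip(sentence, word_positions):
    for piece in split_subwords(word):
        subwords.append(piece)
        subword_positions.append(pos)   # inherit the word's absolute position
```

In the full method these positions would be embedded and fused into the self-attention computation during training or prediction.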
MULTI-CLASS CLASSIFICATION USING A DUAL MODEL
A method for receiving a full training data set including a plurality of individual training data sets; dividing the plurality of individual training data sets into N classes, where N is an integer greater than three; dividing the N classes into M full data classes and N-M partial data classes; performing training to obtain a trained fixed size machine learning (ML) classification model and a trained in-class confidence model; outputting a first set of prediction value(s) based on the performance of training; distributing each class of the N classes of individual training data sets to a different node of a distributed machine learning system; and outputting, from the nodes of the distributed machine learning system, a second set of prediction value(s) for each class of the N classes.