Patent classifications
G06F18/2433
Machine learning-based root cause analysis of process cycle images
The technology disclosed relates to classification of process cycle images to predict success or failure of process cycles. The technology disclosed includes capturing and processing images of sections arranged on an image-generating chip in a genotyping process. Image description features of production cycle images are created and given as input to classifiers. A trained classifier separates successful production images from unsuccessful or failed production images. The failed production images are further classified by a trained root cause classifier into various categories of failure.
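The two-stage classification described above can be sketched in Python. This is a minimal illustration, not the patented implementation: the nearest-centroid classifiers, the two-dimensional image-description features, and the failure categories (`reagent_issue`, `surface_defect`) are all hypothetical stand-ins for the trained classifiers in the abstract.

```python
import numpy as np

def nearest_centroid(x, centroids):
    """Return the label of the centroid closest to feature vector x."""
    labels = list(centroids)
    dists = [np.linalg.norm(x - centroids[k]) for k in labels]
    return labels[int(np.argmin(dists))]

# Hypothetical image-description features (e.g. intensity/texture summaries).
pass_fail_centroids = {
    "success": np.array([0.9, 0.1]),
    "failure": np.array([0.2, 0.8]),
}
root_cause_centroids = {
    "reagent_issue":  np.array([0.1, 0.9]),
    "surface_defect": np.array([0.3, 0.6]),
}

def classify_cycle_image(features):
    """Stage 1: separate success from failure; stage 2: assign a
    root-cause category to failed production images only."""
    verdict = nearest_centroid(features, pass_fail_centroids)
    if verdict == "success":
        return ("success", None)
    return ("failure", nearest_centroid(features, root_cause_centroids))
```

A real system would replace the centroids with trained models (e.g. a CNN feature extractor plus classifiers), but the two-stage control flow is the same.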
ANOMALY DETECTING METHOD IN SEQUENCE OF CONTROL SEGMENT OF AUTOMATION EQUIPMENT USING GRAPH AUTOENCODER
Disclosed is a method of analyzing programmable logic controller (PLC) logic to detect whether an anomaly that deviates from a standard pattern occurs in a repeated cycle. After the operation pattern of automation equipment and processes is modeled and patterned as a graph, an anomaly detecting model capable of detecting whether a pattern is abnormal may be constructed as a graph AutoEncoder model. By detecting changes in the process pattern, anomalies in the equipment and processes can be detected early.
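The reconstruction-error idea behind the graph AutoEncoder can be sketched with a deliberately simplified stand-in: a rank-1 linear autoencoder (truncated SVD) over flattened adjacency matrices. The chain graph, noise model, and threshold below are hypothetical; the patent's model is a learned graph AutoEncoder, not this linear approximation, but the anomaly criterion — high reconstruction error on a cycle's graph — is the same.

```python
import numpy as np

def fit_linear_ae(adjacency_mats, k=1):
    """Fit a rank-k linear autoencoder (truncated SVD) on flattened
    adjacency matrices of normal production cycles."""
    X = np.stack([a.ravel() for a in adjacency_mats]).astype(float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]              # decoder rows = top-k components

def recon_error(adj, mean, comps):
    """Reconstruction error of one cycle's graph; high error => anomaly."""
    x = adj.ravel().astype(float) - mean
    return float(np.linalg.norm(x - comps.T @ (comps @ x)))

# Normal cycles share a chain pattern; the anomalous one adds an edge.
chain = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
normal = [chain + 0.01 * i for i in range(5)]   # hypothetical slight drift
mean, comps = fit_linear_ae(normal)
anomalous = chain.copy()
anomalous[0, 2] = anomalous[2, 0] = 1           # unexpected extra transition
threshold = 0.5                                 # hypothetical validation threshold
is_anomaly = recon_error(anomalous, mean, comps) > threshold
```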
System and Method for Dimensioning Target Objects
A method comprising obtaining, from a sensor, depth data representing a target object; selecting a model to fit to the depth data; for each data point in the depth data: defining a ray from a location of the sensor to the data point; and determining an error based on a distance from the data point to the model along the ray; when the depth data does not meet a similarity threshold for the model based on the determined errors, selecting a new model and repeating the error determination for the depth data based on the new model; when the depth data meets the similarity threshold for the model, selecting the model as representing the target object; and outputting the selected model representing the target object.
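The per-point ray error and the try-another-model loop can be illustrated with a toy candidate set. Assumptions not in the abstract: the sensor sits at the origin, candidate models are horizontal planes `z = c`, and the similarity threshold is a maximum per-point error; the patent's model family and threshold definition may differ.

```python
import numpy as np

def ray_errors(points, plane_z):
    """For each depth point, cast a ray from the sensor (origin) through
    the point and measure how far along the ray the candidate model
    (a horizontal plane z = plane_z) lies from the point itself."""
    errs = []
    for p in points:
        dist = np.linalg.norm(p)      # sensor-to-point range
        d = p / dist                  # unit ray direction
        t = plane_z / d[2]            # range at the ray/plane intersection
        errs.append(abs(t - dist))
    return np.array(errs)

def fit_by_candidates(points, candidate_planes, tol=0.05):
    """Try candidate models until every per-point ray error is within
    tol (the similarity threshold); return the accepted model or None."""
    for plane_z in candidate_planes:
        if ray_errors(points, plane_z).max() <= tol:
            return plane_z
    return None

pts = np.array([[0.1, 0.0, 2.0], [0.0, 0.2, 2.0], [-0.1, 0.1, 2.0]])
best = fit_by_candidates(pts, candidate_planes=[1.0, 2.0, 3.0])
```

Measuring the error along the ray (rather than perpendicular to the model) matches how a depth sensor actually observes the surface, which is the distinctive step in the abstract.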
Method for learning a vehicle behavior of a monitored automobile and a respective automobile
A vehicle behavior of a monitored vehicle is learned. A vehicle illumination of the monitored vehicle is detected and monitored. If a light-pattern occurs in the detected vehicle illumination, where the light-pattern corresponds to a frequency-, intensity- and/or color-dependent glowing of the vehicle illumination, and where the light-pattern starts with a flashing of the detected vehicle illumination and ends after a certain time without glowing of the respective part of the vehicle illumination, then the method further monitors the light-pattern; monitors a vehicle movement of the monitored vehicle during the occurrence of the light-pattern; and compares the monitored light-pattern with a known light-pattern from a light-pattern data entry stored in a light-pattern database. If the comparison results in the monitored light-pattern being unknown, the method stores the light-pattern and the vehicle movement together as a new light-pattern data entry in the light-pattern database.
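The compare-then-learn protocol reduces to a lookup with a write-on-miss. A minimal sketch, assuming a light-pattern can be keyed by a (type, frequency, colour) tuple and the database is an in-memory dict — both hypothetical simplifications of the patent's light-pattern data entries:

```python
# Hypothetical in-memory stand-in for the light-pattern database.
pattern_db = {
    # pattern key -> vehicle movement observed during the pattern
    ("flash", 2.0, "amber"): "lane_change",
}

def observe(pattern, movement, db):
    """Compare an observed light-pattern against known entries;
    store it together with the vehicle movement if unknown."""
    if pattern in db:
        return "known", db[pattern]
    db[pattern] = movement            # learn the new pattern
    return "learned", movement

status, _ = observe(("flash", 1.0, "red"), "braking", pattern_db)
```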
EXPLAINABLE RESPONSE TIME PREDICTION OF STORAGE ARRAYS DETECTION
An outlier detection mechanism is disclosed that improves transparency and explainability in machine learning models. The outlier detection mechanism can quantify, at prediction time, how a new observation differs from training observations. The outlier detection mechanism can also provide a way to aggregate outputs from decision trees by weighting the outputs of the decision trees based on their explainability.
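The explainability-weighted aggregation can be sketched as follows. The abstract does not define the explainability measure, so the sketch assumes a simple proxy — shallower decision paths are more explainable and get larger weight — which is an illustrative choice, not the patented metric:

```python
def weighted_forest_predict(tree_outputs, path_depths):
    """Aggregate decision-tree outputs, weighting each tree by an
    explainability proxy: shallower decision paths weigh more."""
    weights = [1.0 / (1 + d) for d in path_depths]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, tree_outputs)) / total

# Three trees predict a response time (ms); the shallow trees dominate,
# pulling the aggregate toward the more explainable predictions.
pred = weighted_forest_predict([10.0, 12.0, 30.0], path_depths=[1, 2, 8])
```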
Artificial intelligence system for inspecting image reliability
A system for inspecting the reliability of an image. The system may include a processor in communication with a client device; and a storage medium. The storage medium may store instructions that, when executed, configure the processor to perform operations including: obtaining a plurality of images; categorizing the images into a plurality of image classes; calculating a plurality of probability outcomes; determining whether highest predicted probabilities of the images are less than a first threshold and whether an entropy of a predicted density of the probability outcomes exceeds a second threshold; indicating whether the image is associated with the image classes; ranking the image amongst the plurality of images; filtering a plurality of low reliability images according to a third threshold; providing a likelihood of whether a user scanned a vehicle object associated with the image; and identifying a percentage of user scan failures.
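The core reliability test — low top-class probability combined with high distribution entropy — is easy to make concrete. The threshold values below are hypothetical; the abstract only says a first and second threshold exist:

```python
import math

def low_reliability(probs, max_thr=0.6, entropy_thr=1.0):
    """Flag an image as low-reliability when its highest predicted
    probability is below max_thr AND the entropy of its class
    distribution exceeds entropy_thr (thresholds are hypothetical)."""
    top = max(probs)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return top < max_thr and entropy > entropy_thr

confident = low_reliability([0.9, 0.05, 0.05])  # peaked -> reliable
uncertain = low_reliability([0.4, 0.3, 0.3])    # flat -> low reliability
```

A near-uniform distribution maximizes entropy and minimizes the top probability, so it trips both thresholds; a peaked distribution trips neither.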
MODELING METHOD AND APPARATUS
A modeling method and an apparatus are disclosed. The method includes: obtaining a first data set of a first indicator, and determining, based on the first data set, a second indicator similar to the first indicator; and determining a first model based on one or more second models associated with the second indicator. The first model is used to detect a status of the first indicator, and the status of the first indicator includes an abnormal state or a normal state. The second models are used to detect a status of the second indicator, and the status of the second indicator includes an abnormal state or a normal state.
System and method for detecting backdoor attacks in convolutional neural networks
Described is a system for detecting backdoor attacks in deep convolutional neural networks (CNNs). The system compiles specifications of a pretrained CNN into an executable model, resulting in a compiled model. A set of Universal Litmus Patterns (ULPs) are fed through the compiled model, resulting in a set of model outputs. The set of model outputs are classified and used to determine presence of a backdoor attack in the pretrained CNN. The system performs a response based on the presence of the backdoor attack.
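The detection flow — run a fixed pattern set through the model, pool the outputs, classify — can be sketched with toy stand-ins. Everything concrete here is hypothetical: real ULPs are learned/optimized input patterns (not zeros), the compiled model is a CNN (not a `tanh`), and the detector is trained (not uniform weights). Only the data flow matches the abstract.

```python
import numpy as np

def run_ulps(model, ulps):
    """Feed each Universal Litmus Pattern through the compiled model
    and concatenate the outputs into one feature vector."""
    return np.concatenate([model(u) for u in ulps])

# Illustrative fixed patterns; real ULPs are optimized inputs.
ulps = [np.zeros(4) for _ in range(3)]
detector_w = np.ones(6) / 6.0       # hypothetical trained detector weights

def clean_model(x):                 # toy stand-in for a compiled CNN
    return np.tanh(x[:2])

def backdoored_model(x):            # same model with trigger-shifted outputs
    return np.tanh(x[:2]) + 1.0

def has_backdoor(model, thr=0.5):
    """Classify the pooled ULP responses: above thr => backdoor present."""
    return float(detector_w @ run_ulps(model, ulps)) > thr
```

The key property the patent exploits is that a backdoored network responds systematically differently to the litmus patterns than a clean one, so a simple classifier over the pooled responses suffices.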
Classification in hierarchical prediction domains
There is a need for classification solutions in hierarchical prediction domains. This need can be addressed by, for example, performing one or more of online machine learning, co-occurrence analysis machine learning, structured fusion machine learning, and unstructured fusion machine learning. In one example, structured prediction inputs are processed in accordance with an online machine learning analysis to generate structurally hierarchical predictions and in accordance with a co-occurrence analysis machine learning analysis to generate structurally non-hierarchical predictions. Then, the structurally hierarchical predictions and the structurally non-hierarchical predictions are processed by a structured fusion model to generate structure-based predictions. Afterward, the structure-based predictions and non-structure-based predictions are processed in accordance with an unstructured fusion model to generate one or more unstructured-fused predictions.
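The two-level fusion pipeline can be sketched as successive weighted blends of per-label scores. The labels, scores, and fusion weights below are hypothetical; the patent's fusion models are learned, not fixed linear blends, but the data flow (structured fusion first, then unstructured fusion) is as described:

```python
def structured_fuse(hier, non_hier, w_struct=0.7):
    """Structured fusion: blend structurally hierarchical and
    structurally non-hierarchical prediction scores per label."""
    return {k: w_struct * hier[k] + (1 - w_struct) * non_hier[k]
            for k in hier}

def unstructured_fuse(structured, unstructured, w=0.5):
    """Unstructured fusion: blend structure-based and
    non-structure-based predictions into final scores."""
    return {k: w * structured[k] + (1 - w) * unstructured[k]
            for k in structured}

hier     = {"cardiology": 0.8, "oncology": 0.1}  # hypothetical label scores
non_hier = {"cardiology": 0.6, "oncology": 0.3}
text     = {"cardiology": 0.9, "oncology": 0.2}  # non-structure-based input

final = unstructured_fuse(structured_fuse(hier, non_hier), text)
```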