Patent classifications
G06V10/7784
Validation method and system to improve data accuracy
An automated method and system for validating (cross-validating) data fields in an electronic document, such as a document that has been passed through an optical character recognition (“OCR”) or Intelligent Document Recognition (“IDR”) system or software, to improve the accuracy of the electronic document.
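One way to picture this kind of cross-validation is an arithmetic consistency check between recognized fields. The sketch below assumes hypothetical invoice fields (`line_items`, `total`) and a rounding tolerance; none of these names come from the patent itself:

```python
def cross_validate_total(fields, tol=0.01):
    """Cross-validate OCR'd data fields against one another (illustrative).

    One concrete rule of this kind: recognized line-item amounts should sum
    to the recognized total, so a mismatch flags a likely OCR/IDR error.
    The field names here are hypothetical, not taken from the patent.
    """
    recognized_sum = sum(fields["line_items"])
    return abs(recognized_sum - fields["total"]) <= tol
```

A real system would chain many such rules (dates within ranges, check digits, totals vs. subtotals) and flag fields that fail for review or re-recognition.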
Continuously learning, stable and robust online machine learning system
An Online Machine Learning System (OMLS) including: an Online Preprocessing Engine (OPrE) configured to (a) receive streaming data including an instance comprising a vector of inputs, the vector of inputs comprising a plurality of continuous or categorical features; (b) discretize features; (c) impute missing feature values; (d) normalize features; and (e) detect drift or change in features; an Online Feature Engineering Engine (OFEE) configured to produce features; an Online Robust Feature Selection Engine (ORFSE) configured to evaluate and select features; and an Online Machine Learning Engine (OMLE) configured to incorporate and utilize one or more machine learning algorithms or models utilizing features to generate a result, and capable of incorporating and utilizing multiple different machine learning algorithms or models, wherein each of the OMLE, the OPrE, the OFEE, and the ORFSE is continuously communicatively coupled to each of the others, and wherein the OMLS is configured to perform continuous online machine learning.
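As a rough illustration of one OPrE duty — normalizing streaming features without storing the stream — here is a minimal running z-score normalizer using Welford's online algorithm. The class name and API are illustrative stand-ins, not the patent's:

```python
class OnlineNormalizer:
    """Streaming z-score normalization via Welford's algorithm (a sketch).

    Maintains a running mean and sum of squared deviations so each incoming
    feature value can be normalized using only O(1) state.
    """
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Welford's incremental update of mean and squared-deviation sum.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def normalize(self, x):
        # Fall back to 0.0 until enough variance information has streamed in.
        if self.n < 2 or self.m2 == 0:
            return 0.0
        var = self.m2 / (self.n - 1)  # sample variance
        return (x - self.mean) / var ** 0.5
```

Discretization, imputation, and drift detection would be analogous small stateful components fed by the same stream.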
Nuclear image processing method
A nuclear image processing method is provided. The method includes the following steps: inputting a normalized standard-space nuclear image; selecting a voxel of the normalized standard-space nuclear image and collecting the values of the neighboring voxels to form a voxel value set; conducting a data augmentation algorithm to generate a voxel distribution function; calculating an expected value of the distribution, a first standard deviation of the portion above the expected value, and a second standard deviation of the portion below the expected value; and repeating the above steps to calculate the expected value, the first standard deviation, and the second standard deviation of the necessary voxels, so as to form an image standardization template set including an expected value template, a first standard deviation template, and a second standard deviation template.
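The per-voxel statistics in these steps can be sketched in a few lines of NumPy. The sketch assumes the augmented values for one voxel have already been collected into an array; function and variable names are illustrative, not the patent's:

```python
import numpy as np

def asymmetric_stats(samples):
    """Expected value plus separate std devs above/below it (a sketch).

    `samples` holds augmented values for one voxel (its distribution).
    Because the distribution may be skewed, the spread is measured
    separately for the portion above and below the expected value.
    """
    mu = samples.mean()
    upper = samples[samples >= mu]
    lower = samples[samples <= mu]
    # First standard deviation: spread of the portion above the mean.
    sigma_1 = np.sqrt(np.mean((upper - mu) ** 2))
    # Second standard deviation: spread of the portion below the mean.
    sigma_2 = np.sqrt(np.mean((lower - mu) ** 2))
    return mu, sigma_1, sigma_2
```

Repeating this per voxel yields the three templates of the standardization set: one volume of expected values and one volume for each one-sided standard deviation.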
Method for training defect detector
A method for training a defect detector comprises: obtaining a first reference image of a first reference object, wherein the first reference object has a defect and the first reference image has a first label indicating the defect; training a reconstruction model according to a second reference image of a second reference object associated with the first reference object, wherein a defect level of the second reference object is in a tolerable range with an upper limit; obtaining a target image of a target object associated with the first reference object and the second reference object; generating a second label according to the target image, the reconstruction model and an error calculation procedure, wherein the second label comprises a defect of the target object; and training a defect detector by performing a machine learning algorithm according to the first reference image, the target image and the second label.
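The pseudo-labeling step — comparing a target image against its reconstruction to locate defects — can be pictured with a small sketch. Here `reconstruct` is a stand-in callable for the model trained on near-defect-free references, and the per-pixel absolute error with a threshold is one plausible "error calculation procedure" (the threshold and names are assumptions, not the patent's):

```python
import numpy as np

def pseudo_label(target, reconstruct, threshold=0.1):
    """Generate a defect label from reconstruction error (illustrative).

    A model trained only on images within the tolerable defect range
    should reconstruct normal structure well; pixels it cannot reproduce
    are flagged as defects and become the second label.
    """
    error = np.abs(target - reconstruct(target))  # error calculation step
    return (error > threshold).astype(np.uint8)   # 1 = defect pixel
```

The resulting mask, together with the manually labeled first reference image, then feeds the supervised training of the defect detector.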
MACHINE LEARNING SYSTEM AND METHOD FOR DETERMINING OR INFERRING USER ACTION AND INTENT BASED ON SCREEN IMAGE ANALYSIS
System(s) and method(s) that analyze image data associated with a computing screen operated by a user, and learn from the image data (e.g., using pattern recognition, historical information analysis, user implicit and explicit training data, optical character recognition (OCR), video information, 360°/panoramic recordings, and so on) to concurrently glean information regarding multiple states of user interaction (e.g., analyzing data associated with multiple applications open on a desktop, mobile phone or tablet). A machine learning model is trained on analysis of graphical image data associated with screen display to determine or infer user intent. An input component receives image data regarding a screen display associated with user interaction with a computing device. An analysis component employs the model to determine or infer user intent based on the image data analysis; and an action component provisions services to the user as a function of the determined or inferred user intent. In an implementation, a gaming component gamifies interaction with the user in connection with explicitly training the model.
Systems and methods for removing background noise in an industrial pump environment
Methods and systems for monitoring a plurality of components of a pump in an industrial environment include a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of a plurality of input sensors operationally coupled to the pump and communicatively coupled to the data acquisition circuit; a data processing circuit structured to utilize at least one of the plurality of detection values to perform at least one noise processing operation on at least a portion of the plurality of detection values; a signal evaluation circuit structured to determine a pump performance parameter in response to the noise-processed portion of the plurality of detection values; and a response circuit structured to perform at least one operation in response to the pump performance parameter.
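One simple example of a noise processing operation on a stream of detection values is a moving-average filter. The patent describes hardware/firmware circuits, so this NumPy sketch is only an illustrative software analog:

```python
import numpy as np

def moving_average(values, window=5):
    """Smooth sensor detection values with a moving-average filter (a sketch).

    Averaging over a sliding window attenuates high-frequency background
    noise before a performance parameter is computed from the signal.
    """
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")
```

Real pump-monitoring pipelines would typically combine such filtering with band-pass or spectral methods tuned to the pump's rotational frequencies.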
Diagnostic tool for deep learning similarity models
A diagnostic tool for deep learning similarity models and image classifiers provides valuable insight into neural network decision-making. A disclosed solution generates a saliency map by: receiving a baseline image and a test image; determining, with a convolutional neural network (CNN), a first similarity between the baseline image and the test image; based on at least determining the first similarity, determining, for the test image, a first activation map for at least one CNN layer; based on at least determining the first similarity, determining, for the test image, a first gradient map for the at least one CNN layer; and generating a first saliency map as an element-wise function of the first activation map and the first gradient map. Some examples further determine a region of interest (ROI) in the first saliency map, crop the test image to an area corresponding to the ROI, and determine a refined similarity score.
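The element-wise combination of an activation map and a gradient map resembles Grad-CAM-style attribution. A minimal NumPy sketch follows; the channel sum, clipping, and normalization are common conventions assumed here, not steps stated in the abstract:

```python
import numpy as np

def saliency_map(activations, gradients):
    """Element-wise saliency from one CNN layer (a Grad-CAM-style sketch).

    `activations`: (C, H, W) feature maps for the test image.
    `gradients`:   (C, H, W) gradients of the similarity score w.r.t. them.
    The element-wise product weights each activation by its influence on
    the similarity; channels are summed and negatives clipped away.
    """
    weighted = activations * gradients          # element-wise function
    smap = np.maximum(weighted.sum(axis=0), 0)  # ReLU over channel sum
    if smap.max() > 0:
        smap /= smap.max()                      # scale to [0, 1]
    return smap
```

The bright region of the resulting map is what the ROI-cropping refinement step would then zoom into.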
SYSTEMS AND METHODS FOR OBJECT DETECTION IN EXTREME LOW-LIGHT CONDITIONS
Systems and methods for detecting objects in photon-limited environments are disclosed for use in, for example, security, defense, life science, autonomous vehicles, and various consumer and medical applications. At least one embodiment integrates a non-local feature aggregation method and a knowledge distillation method with state-of-the-art detector networks. The two methods offer better feature representations for photon-limited images. In comparison with baseline systems, detectors according to embodiments of the present disclosure demonstrate superior performance in synthetic and real environments. When embodiments are applied to the latest photon counting devices, object detection can be achieved at a photon level of 1 photon per pixel or lower, significantly surpassing the capabilities of existing CMOS image sensors and algorithms.
Estimation apparatus, learning apparatus, estimation method, learning method, and program
An estimation apparatus, a learning apparatus, an estimation method, a learning method, and a program capable of accurate body tracking without attaching many trackers to a user are provided. A feature extraction section (68) outputs feature data indicating a feature of a time-series transition up to a latest timing in response to an input of input data that contains region data indicating a position, a posture, or a motion of a region of a body at the latest timing and the feature data previously output from the feature extraction section (68) at a timing preceding the latest timing. An estimation section (72) estimates a position, a posture, or a motion of another region of the body closer to a center of the body than the region at the latest timing on the basis of the feature data indicating the feature of the time-series transition up to the latest timing.
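The feature extraction section behaves like one step of a recurrent network: it consumes the latest region data together with its own previous output. A toy sketch of such a step is below; the weight matrices are random stand-ins for learned parameters, and the dimensions are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(8, 6)) * 0.1   # maps region data (pos/posture/motion)
W_rec = rng.normal(size=(8, 8)) * 0.1  # maps previously output feature data

def extract_features(region, prev_features):
    """One recurrent step of a feature extraction section (illustrative).

    Combines the latest region data with the feature data output at the
    preceding timing, so the result summarizes the time-series transition
    up to the latest timing.
    """
    return np.tanh(W_in @ region + W_rec @ prev_features)
```

An estimation head would then read this feature vector to predict the position, posture, or motion of a region closer to the body's center.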
Systems and methods for fine tuning image classification neural networks
An authentication engine, residing at one or more computing machines, receives, from a vision device comprising one or more cameras, a probe image. The authentication engine generates, using a trained facial classification neural engine, one or more first labels for a person depicted in the probe image and a probability for at least one of the one or more first labels. The authentication engine determines that the probability is within a predefined low accuracy range. The authentication engine generates, using a supporting engine, a second label for the person depicted in the probe image. The supporting engine operates independently of the trained facial classification neural engine. The authentication engine further trains the facial classification neural engine based on the second label.
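The routing logic — fall back to the independent supporting engine when the classifier's probability lands in the low-accuracy range, then reuse its label for further training — can be sketched with plain callables standing in for the two engines. The range bounds and all names here are illustrative assumptions:

```python
def select_label(classifier, supporting, image, low=(0.4, 0.7)):
    """Route low-confidence predictions to a supporting engine (a sketch).

    `classifier` returns (label, probability) like the trained facial
    classification engine; `supporting` returns a label and operates
    independently of it. Returns (label, needs_retraining): when the second
    label is used, it should also feed further training of the classifier.
    """
    label, prob = classifier(image)
    if low[0] <= prob <= low[1]:       # predefined low-accuracy range
        return supporting(image), True  # independent second label
    return label, False
```

In the patent's flow, each `(image, second_label)` pair produced this way becomes a fine-tuning example for the facial classification network.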