G06V30/19133

Machine learning for document authentication
20190372968 · 2019-12-05

Computer systems and methods are provided for using a machine learning system to analyze authentication information. First authentication information for a first transaction, including at least a first image that corresponds to a first identification document, is received. First validation information that corresponds to a first validation fault is received from a validation system. Data storage of a machine learning system stores the first validation information. Second authentication information for a second transaction, including a second image that corresponds to a second identification document, is received. The machine learning system determines a first validation value that corresponds to a probability that the second image includes the first validation fault. The first validation value is used to determine whether fault review criteria are met. In accordance with a determination that the fault review criteria are met, the second image is transmitted to the validation system.
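
The fault-review gating described in this abstract can be sketched as a simple thresholding step. This is an illustrative sketch only; the threshold value, function name, and return strings are assumptions, not details from the patent.

```python
# Hypothetical sketch: the machine learning system's validation value
# (probability that the second image exhibits the known fault) is checked
# against assumed fault review criteria, and qualifying images are
# forwarded to the validation system.

FAULT_REVIEW_THRESHOLD = 0.8  # assumed criterion; the abstract does not specify one

def route_image(validation_value: float, image_id: str) -> str:
    """Decide where the second image goes, given the model's fault probability."""
    if validation_value >= FAULT_REVIEW_THRESHOLD:
        return f"validation_system <- {image_id}"  # fault review criteria met
    return f"auto_accept <- {image_id}"            # no review needed

print(route_image(0.93, "img_002"))
print(route_image(0.12, "img_003"))
```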

PHRASE RECOGNITION MODEL FOR AUTONOMOUS VEHICLES

Aspects of the disclosure relate to training and using a phrase recognition model to identify phrases in images. As an example, a selected phrase list that includes a plurality of phrases may be received. Each phrase of the plurality of phrases includes text. An initial plurality of images may be received. A training image set may be selected from the initial plurality of images by identifying the phrase-containing images that include one or more phrases from the selected phrase list. Each given phrase-containing image of the training image set may be labeled with information identifying the one or more phrases from the selected phrase list included in the given phrase-containing image. The model may be trained based on the training image set such that the model is configured to, in response to receiving an input image, output data indicating whether a phrase of the plurality of phrases is included in the input image.
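
As a concrete illustration of the selection and labeling steps, the sketch below filters an initial set of images down to the phrase-containing ones and labels each with the phrases it matches. The `detect_phrases` callable and the toy "image as text" representation are assumptions for demonstration, not the patented implementation.

```python
# Build the labeled training image set: keep only images containing phrases
# from the selected phrase list, and label each with the matched phrases.
from typing import Callable

def build_training_set(images: list[str],
                       phrase_list: list[str],
                       detect_phrases: Callable[[str], set[str]]) -> list[tuple[str, list[str]]]:
    training_set = []
    for image in images:
        found = [p for p in phrase_list if p in detect_phrases(image)]
        if found:  # a phrase-containing image
            training_set.append((image, found))  # label with the matched phrases
    return training_set

# toy detector: pretend each "image" is just the text it contains
toy_detect = lambda img: set(img.split(";"))
imgs = ["stop;yield", "no text here", "bus lane"]
print(build_training_set(imgs, ["stop", "bus lane"], toy_detect))
# [('stop;yield', ['stop']), ('bus lane', ['bus lane'])]
```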

Systems and user interfaces for enhancement of data utilized in machine-learning based medical image review

Systems and techniques are disclosed for improvement of machine learning systems based on enhanced training data. An example method includes generating an interactive classification user interface concurrently displaying a first group of medical images and a second group of medical images, each group depicting objects associated with a respective classification. User input indicating movement of medical images from the first group to the second group is detected. The moved medical images are classified according to the second group. The re-classified medical images are provided to a machine learning system, with the machine learning system updating based on analysis of object characteristics of the re-classified medical images to increase accuracies associated with automated assignment of classifications.
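
One minimal way to model the drag-to-reclassify interaction is shown below. The data shapes and names are hypothetical, standing in for the interactive user interface and the downstream learning system.

```python
# Moving a medical image from the first displayed group to the second
# re-labels it; the re-labeled examples then serve as corrected training
# data for the machine learning system.

def apply_moves(groups: dict[str, set[str]],
                moves: list[tuple[str, str, str]]) -> dict[str, set[str]]:
    """moves: (image_id, from_group, to_group) tuples detected from user input."""
    for image_id, src, dst in moves:
        groups[src].discard(image_id)
        groups[dst].add(image_id)  # image now classified under the second group
    return groups

groups = {"benign": {"im1", "im2"}, "malignant": {"im3"}}
updated = apply_moves(groups, [("im2", "benign", "malignant")])
# corrected examples handed to the learning system for an update pass
relabeled = [(img, "malignant") for img in sorted(updated["malignant"])]
```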

ONLINE, INCREMENTAL REAL-TIME LEARNING FOR TAGGING AND LABELING DATA STREAMS FOR DEEP NEURAL NETWORKS AND NEURAL NETWORK APPLICATIONS

Today, artificial neural networks are trained on large sets of manually tagged images. Generally, for better training, the training data should be as large as possible. Unfortunately, manually tagging images is time consuming and susceptible to error, making it difficult to produce the large sets of tagged data used to train artificial neural networks. To address this problem, the inventors have developed a smart tagging utility that uses a feature extraction unit and a fast-learning classifier to learn tags and tag images automatically, reducing the time to tag large sets of data. The feature extraction unit and fast-learning classifiers can be implemented as artificial neural networks that associate a label with features extracted from an image and tag similar features from the image or other images with the same label. Moreover, the smart tagging system can learn from user adjustment to its proposed tagging. This reduces tagging time and errors.
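
A toy version of the described loop appears below: a feature extractor maps images to vectors, and a fast-learning classifier associates labels with those features so similar images can be tagged automatically and corrected by the user. The abstract says these can be artificial neural networks; a nearest-centroid classifier is used here purely as an assumed, easy-to-follow stand-in.

```python
# Incremental smart tagger: learn a label's feature centroid on the fly,
# propose tags for new features by nearest centroid, and fold user
# corrections back in as further training.
import math

class FastTagger:
    def __init__(self):
        self.centroids: dict[str, list[float]] = {}
        self.counts: dict[str, int] = {}

    def learn(self, features: list[float], label: str) -> None:
        """Online update: fold the feature vector into the label's running mean."""
        if label not in self.centroids:
            self.centroids[label], self.counts[label] = list(features), 1
            return
        n = self.counts[label] = self.counts[label] + 1
        c = self.centroids[label]
        for i, x in enumerate(features):
            c[i] += (x - c[i]) / n  # incremental mean

    def tag(self, features: list[float]) -> str:
        """Propose the label whose centroid is closest to the features."""
        return min(self.centroids, key=lambda lbl: math.dist(self.centroids[lbl], features))

tagger = FastTagger()
tagger.learn([0.0, 1.0], "cat")
tagger.learn([1.0, 0.0], "dog")
print(tagger.tag([0.1, 0.9]))    # proposed tag for a similar image
tagger.learn([0.1, 0.9], "cat")  # user confirms/adjusts; the tagger learns from it
```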

METHOD AND APPARATUS FOR RECOGNIZING HANDWRITTEN CHARACTERS USING FEDERATED LEARNING
20200005081 · 2020-01-02

Provided is a method for recognizing handwritten characters in a terminal through federated learning. In the method, a first common prediction model for recognizing text from handwritten characters input from a user is applied, the handwritten characters are received from the user, feature values are extracted from an image including the handwritten characters, the feature values are input to the first common prediction model, first text information is determined from an output of the first common prediction model, the first text information and second text information received from the user for error correction of the first text information are cached, and the first common prediction model is learned using the image including the handwritten characters, the first text information, and the second text information. In this way, the terminal can determine the text from the handwritten characters input by the user, and can learn the first common prediction model through a feedback operation of the user.
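
The terminal-side feedback loop can be sketched as below, with hypothetical names: the terminal predicts text from handwriting features using the common model, caches the prediction alongside the user's correction, and those cached pairs become local training data for the federated update. The model aggregation step across terminals is not shown.

```python
# On-device feedback cache for federated learning: (image, model output,
# user correction) triples are stored locally for later training.
from typing import Callable

class Terminal:
    def __init__(self, predict: Callable[[list[float]], str]):
        self.predict = predict  # stand-in for the first common prediction model
        self.cache: list[tuple[str, str, str]] = []

    def recognize(self, image: str, features: list[float]) -> str:
        """Determine first text information from the model's output."""
        return self.predict(features)

    def correct(self, image: str, features: list[float], second_text: str) -> None:
        """Cache the model output together with the user's error correction."""
        first_text = self.predict(features)
        self.cache.append((image, first_text, second_text))

toy_model = lambda feats: "hel1o" if feats else ""  # deliberately misreads an 'l'
t = Terminal(toy_model)
print(t.recognize("img.png", [0.2, 0.7]))  # model's first guess
t.correct("img.png", [0.2, 0.7], "hello")  # user fixes the error; pair is cached
```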

Collaborative text detection and text recognition
11907977 · 2024-02-20

Described are approaches for assigning tasks among machine resources (e.g., AI task performers, AI task validators), human resources (e.g., task performers, task validators), and/or other smart systems to facilitate collaborative text detection, text recognition, and text retrieval. Tasks are assigned so as to optimize system performance along a variety of selection criteria specifying various performance dimensions, including, but not limited to, improving system efficiency, reducing task performer and/or task validator idle time, improving triage outcomes, reducing data processing loads, and maintaining client confidentiality, for one or more customers.
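
One common selection criterion for this kind of human/machine division of labor is model confidence. The sketch below is a hypothetical illustration of confidence-based routing, not the patent's assignment logic: high-confidence AI results pass through, and low-confidence ones are triaged to a human validator queue.

```python
# Route one text-recognition task between machine and human resources by a
# single selection criterion (the AI's confidence in its own result).

def assign_task(ai_confidence: float, human_queue: list[str], task_id: str,
                threshold: float = 0.9) -> str:
    if ai_confidence >= threshold:
        return "accepted_by_ai"       # no validator time spent
    human_queue.append(task_id)       # triage to a human validator
    return "queued_for_human"

queue: list[str] = []
print(assign_task(0.97, queue, "t1"))  # accepted_by_ai
print(assign_task(0.55, queue, "t2"))  # queued_for_human
```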

Optical character recognition employing deep learning with machine generated training data

An optical character recognition system employs a deep learning system that is trained to process a plurality of images within a particular domain to identify images representing text within each image and to convert the images representing text to textually encoded data. The deep learning system is trained with training data generated from a corpus of real-life text segments that are generated by a plurality of OCR modules. Each of the OCR modules produces a real-life image/text tuple, and at least some of the OCR modules produce a confidence value corresponding to each real-life image/text tuple. Each OCR module is characterized by a conversion accuracy substantially below a desired accuracy for an identified domain. Synthetically generated text segments are produced by programmatically converting text strings to a corresponding image where each text string and corresponding image form a synthetic image/text tuple.
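
The two sources of training tuples described above can be sketched as follows. The interfaces are assumed: each OCR module is modeled as a callable returning a (text, confidence) pair, with `None` for modules that report no confidence, and `render` stands in for whatever rasterizer converts a text string to an image.

```python
# Assemble training data for the deep learning OCR model from (a) real-life
# image/text tuples emitted by several imperfect OCR modules and (b)
# synthetic tuples rendered programmatically from known text strings.

def collect_tuples(image, ocr_modules, min_confidence=0.5):
    """Gather real-life image/text tuples, dropping low-confidence reads."""
    tuples = []
    for ocr in ocr_modules:
        text, conf = ocr(image)  # each module returns (text, confidence or None)
        if conf is None or conf >= min_confidence:
            tuples.append((image, text, conf))
    return tuples

def synth_tuple(text, render=lambda s: f"<img:{s}>"):
    """Render a text string to an image, forming a synthetic image/text tuple."""
    return (render(text), text)

mods = [lambda im: ("invoice", 0.9),   # accurate module with confidence
        lambda im: ("1nvoice", 0.3),   # low-confidence misread, filtered out
        lambda im: ("invoice", None)]  # module that reports no confidence
print(collect_tuples("scan1", mods))
print(synth_tuple("TOTAL: 42.00"))
```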