Patent classifications
G06V30/19113
Optical receipt processing
Techniques for providing improved optical character recognition (OCR) for receipts are discussed herein. Some embodiments may provide for a system including one or more servers configured to perform receipt image cleanup, logo identification, and text extraction. The image cleanup may include transforming image data of the receipt using image parameter values that optimize the logo identification, and performing logo identification using a comparison of the image data with training logos associated with merchants. When a merchant is identified, a second image cleanup may be performed using image parameter values optimized for text extraction. A receipt structure may be used to categorize the extracted text. Improved OCR accuracy is also achieved by applying format rules of the receipt structure to the extracted text.
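A minimal sketch of the two ideas in this abstract: per-stage image parameter presets (one tuned for logo identification, one for text extraction) and category-specific format rules applied to extracted text. All field names, parameter values, and substitution rules below are illustrative, not taken from the patent.

```python
import re

# Hypothetical format rules keyed by receipt field category. Once extracted
# text is categorized via the receipt structure, category-specific rules
# correct common OCR confusions (letter O read where a digit 0 belongs, etc.).
FORMAT_RULES = {
    "price": [
        (re.compile(r"[Oo]"), "0"),  # letter O/o -> digit 0
        (re.compile(r"[lI]"), "1"),  # lowercase l / capital I -> digit 1
    ],
    "date": [
        (re.compile(r"[Oo]"), "0"),
    ],
}

# Hypothetical cleanup presets: one set of image parameter values tuned for
# logo identification, a second tuned for text extraction once the merchant
# is known.
CLEANUP_PRESETS = {
    "logo": {"contrast": 1.4, "binarize": False},
    "text": {"contrast": 1.1, "binarize": True},
}

def apply_format_rules(field_category, text):
    """Apply the category's substitution rules to raw OCR output."""
    for pattern, replacement in FORMAT_RULES.get(field_category, []):
        text = pattern.sub(replacement, text)
    return text

print(apply_format_rules("price", "$1O.5O"))  # -> $10.50
```

Categories without rules pass text through unchanged, so the correction step is safe to apply to every extracted field.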
Image analysis system for testing in manufacturing
A vision analytics and validation (VAV) system for providing improved inspection of robotic assembly, the VAV system comprising a trained neural network three-way classifier to classify each component as good, bad, or do not know, and an operator station configured to enable an operator to review an output of the trained neural network and to determine whether a board including one or more components classified as bad or do not know passes review and is classified as good, or fails review and is classified as bad. In one embodiment, a retraining trigger utilizes the output of the operator station to retrain the trained neural network, based on the determination received from the operator station.
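A toy sketch of the review-routing and retraining-feedback loop this abstract describes: boards with any bad or do-not-know component go to the operator station, and the operator's verdict is queued as a retraining example. The names and data shapes are illustrative, not from the patent.

```python
from enum import Enum

class Label(Enum):
    """Three-way classifier output for a single component."""
    GOOD = "good"
    BAD = "bad"
    UNKNOWN = "do_not_know"

def needs_operator_review(component_labels):
    """A board goes to the operator station if any component is not good."""
    return any(label is not Label.GOOD for label in component_labels)

# Operator decisions collected here feed the retraining trigger.
retraining_queue = []

def record_operator_decision(board_id, component_labels, passed):
    """Store the operator's pass/fail verdict as a retraining example."""
    retraining_queue.append({
        "board": board_id,
        "labels": component_labels,
        "passed": passed,  # True: board reclassified good; False: bad
    })

print(needs_operator_review([Label.GOOD, Label.UNKNOWN]))  # True
```

A board with only good components bypasses the operator entirely, so human review effort concentrates on the classifier's bad and do-not-know outputs.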
On-device two step approximate string matching
A personalized preview system receives a request to access a collection of media items from a user of a user device. Responsive to receiving the request to access the collection of media items, the personalized preview system accesses user profile data associated with the user, wherein the user profile data includes an image. For example, the image may comprise a depiction of a face, wherein the face comprises a set of facial landmarks. Based on the image, the personalized preview system generates one or more media previews based on corresponding media templates and the image, and displays the one or more media previews within a presentation of the collection of media items at a client device of the user.
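A small sketch of the template-pairing step: each media template declares which facial landmarks it needs, and previews are generated only for templates whose requirements the detected face satisfies. The template schema and landmark names are illustrative assumptions, not from the patent.

```python
def generate_previews(templates, face_landmarks, profile_image):
    """Pair the user's profile image with each compatible media template.

    A template is compatible when every landmark it requires was detected
    in the face depicted in the profile image.
    """
    previews = []
    detected = set(face_landmarks)
    for template in templates:
        if set(template["requires"]) <= detected:
            previews.append({"template": template["name"],
                             "image": profile_image})
    return previews

# Hypothetical templates and detected landmarks.
templates = [
    {"name": "wink", "requires": ["left_eye"]},
    {"name": "smile", "requires": ["mouth", "chin"]},
]
previews = generate_previews(templates,
                             ["left_eye", "right_eye", "mouth"],
                             "profile.png")
print([p["template"] for p in previews])  # ['wink']
```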
Predicting missing entity identities in image-type documents
Techniques for predicting a missing value in an image-type document are disclosed. A system predicts the identity of a supplier associated with an image-type document from which the supplier's identity cannot be extracted by text recognition. When the system determines that the supplier identity cannot be identified using a text recognition application, the system generates a set of machine learning model input features from features extracted from the image-type document to predict the supplier's identity. One input feature is a data file bounds feature indicating whether the image-type document is a scanned document or a non-scanned document. The system predicts a value for the supplier's identity based on the data file bounds value and additional feature values, including color channel characteristics and spatial characteristics of regions-of-interest. The system generates a mapping of values to defined attributes based in part on the predicted value for the supplier's identity.
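A sketch of assembling the model input features named in this abstract: the data file bounds flag (scanned vs. non-scanned), color channel characteristics, and spatial characteristics of regions-of-interest. The document schema and feature names (`is_scanned`, `mean_rgb`, `roi_boxes`) are illustrative assumptions, not from the patent.

```python
def build_model_features(doc):
    """Assemble input features for the supplier-identity model.

    Called when text recognition fails to extract the supplier's identity;
    the model predicts it from document-level characteristics instead.
    """
    r, g, b = doc["mean_rgb"]
    return {
        # Data file bounds feature: scanned vs. non-scanned document.
        "is_scanned": doc["is_scanned"],
        # Color channel characteristics (per-channel means here).
        "mean_r": r, "mean_g": g, "mean_b": b,
        # Spatial characteristics of regions-of-interest: centroid of each
        # (x, y, width, height) bounding box.
        "roi_centroids": [(x + w / 2, y + h / 2)
                          for x, y, w, h in doc["roi_boxes"]],
    }

doc = {"is_scanned": True,
       "mean_rgb": (204, 198, 187),
       "roi_boxes": [(0, 0, 10, 20), (40, 5, 8, 8)]}
features = build_model_features(doc)
print(features["roi_centroids"])  # [(5.0, 10.0), (44.0, 9.0)]
```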
Methods, systems, articles of manufacture, and apparatus to tag segments in a document
Methods, apparatus, systems, and articles of manufacture are disclosed to tag segments in a document. An example apparatus includes processor circuitry to execute machine readable instructions to generate node embeddings for nodes of a graph, the node embeddings based on features extracted from text segments detected in a document, the text segments to be represented by the nodes of the graph; sample edges corresponding to the nodes to generate the graph; generate first updated node embeddings by passing the node embeddings and the graph through layers of a graph neural network, the first updated embeddings corresponding to the node embeddings augmented with neighbor information; generate second updated node embeddings by passing the first updated embeddings through layers of a recurrent neural network, the second updated embeddings corresponding to the first updated node embeddings augmented with sequential information; and classify the text segments based on the second updated node embeddings.
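A toy, dependency-light sketch of the pipeline this abstract describes: node embeddings for text segments, a neighbor-aggregation step standing in for the graph neural network layers, a left-to-right running pass standing in for the recurrent network, and a final classification. The aggregation scheme and all names are illustrative, not taken from the patent.

```python
import numpy as np

def neighbors(i, edges):
    """Undirected neighbor list of node i, given (a, b) edge pairs."""
    return [b for a, b in edges if a == i] + [a for a, b in edges if b == i]

def gnn_pass(emb, edges):
    """Augment each node embedding with the mean of its neighbors'."""
    out = emb.copy()
    for i in range(len(emb)):
        nbrs = neighbors(i, edges)
        if nbrs:
            out[i] = (emb[i] + emb[nbrs].mean(axis=0)) / 2.0
    return out

def rnn_pass(emb):
    """Augment each embedding with a running mean of preceding segments,
    injecting sequential (reading-order) information."""
    out = emb.copy()
    running = np.zeros(emb.shape[1])
    for i in range(len(emb)):
        running = (running * i + emb[i]) / (i + 1)
        out[i] = (emb[i] + running) / 2.0
    return out

def classify(emb, class_weights):
    """Assign each text segment the class with the highest score."""
    return (emb @ class_weights.T).argmax(axis=1)

# Three text segments with 2-d embeddings, chained 0-1-2 in the graph.
emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (1, 2)]
weights = np.array([[1.0, 0.0], [0.0, 1.0]])  # two classes
labels = classify(rnn_pass(gnn_pass(emb, edges)), weights)
print(labels)  # [0 1 1]
```

The two passes mirror the abstract's structure: the first augments embeddings with neighbor information, the second with sequential information, and only then are segments classified.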
Date and time feature identification
Methods and systems for text processing include building a knowledge base using column names and associated functions from a code base. Classifiers are trained using the knowledge base and are cross-validated to determine accuracy scores. Text is processed using a selected classifier having a highest accuracy score from the classifiers to determine date/time features.
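A toy sketch of the selection loop this abstract describes: a knowledge base built from column names, a set of candidate classifiers, and validation-accuracy scoring that keeps the best one. The knowledge base here maps column names directly to date/time labels for brevity, and every name and rule is illustrative, not from the patent.

```python
# Hypothetical knowledge base: column names mapped to whether they denote a
# date/time feature (in the patent, built from a code base's column names
# and associated functions).
KNOWLEDGE_BASE = {
    "created_at": True,
    "updated_ts": True,
    "order_date": True,
    "price": False,
    "customer_name": False,
}

def keyword_classifier(name):
    """Flag a column as date/time if its name contains a known token."""
    return any(token in name for token in ("date", "time", "_at", "_ts"))

def strict_classifier(name):
    """A deliberately narrower candidate: only *_date columns match."""
    return name.endswith("_date")

def accuracy(classifier, examples):
    """Fraction of examples the classifier labels correctly."""
    return sum(classifier(n) == y for n, y in examples.items()) / len(examples)

# Cross-validation stand-in: score each candidate and keep the best.
candidates = [keyword_classifier, strict_classifier]
best = max(candidates, key=lambda c: accuracy(c, KNOWLEDGE_BASE))
print(best.__name__, accuracy(best, KNOWLEDGE_BASE))
```

Real cross-validation would score each candidate on held-out folds of the knowledge base rather than on the full set, but the selection criterion, highest accuracy score, is the same.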