Patent classifications
G06V30/19147
System and Computer-Implemented Method for Character Recognition in Payment Card
The present disclosure relates to a system and computer-implemented method for character recognition in a payment card. The method includes receiving an image of a payment card and one or more details associated with the payment card. Further, a derivative of the image is determined based on the one or more details and a horizontal sum of pixel values is determined for a plurality of rows in the image. Furthermore, one or more Regions of Interest (ROIs) are identified in the image by comparing the horizontal sum of pixel values with a predefined first threshold. Subsequently, one or more characters in the one or more ROIs are extracted using one or more peak values in a histogram of the one or more ROIs. Finally, each of the one or more characters extracted from the one or more ROIs is recognized using a trained Artificial Intelligence technique.
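The projection step described in this abstract (summing pixel values per row and thresholding to find ROIs) is a classic projection-profile technique. A minimal sketch is given below; the function name `find_rois` and the grouping of contiguous above-threshold rows are illustrative assumptions, since the patent publishes no code:

```python
import numpy as np

def find_rois(image, threshold):
    """Find candidate text rows via a horizontal projection of pixel values.

    `image` is a 2-D grayscale array; rows whose summed intensity exceeds
    `threshold` are grouped into contiguous Regions of Interest (ROIs),
    returned as (start_row, end_row) pairs.
    """
    row_sums = image.sum(axis=1)      # horizontal sum per row
    active = row_sums > threshold     # rows likely containing characters
    rois, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i                 # ROI begins
        elif not flag and start is not None:
            rois.append((start, i))   # ROI ends
            start = None
    if start is not None:
        rois.append((start, len(active)))
    return rois
```

The per-ROI character extraction via histogram peaks would apply the same idea column-wise within each ROI.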
CONTINUOUS MACHINE LEARNING METHOD AND SYSTEM FOR INFORMATION EXTRACTION
Methods and systems for artificial intelligence (AI)-assisted document annotation and training of machine learning-based models for document data extraction are described. The methods and systems described herein take advantage of a continuous machine learning approach to create document processing pipelines that provide accurate and efficient data extraction from documents that include structured text, semi-structured text, unstructured text, or any combination thereof.
Training a card type classifier with simulated card images
A computer model to identify a type of physical card is trained using simulated card images. The physical card may exist in various subtypes, some of which may not exist or may be unavailable when the model is trained. To more robustly identify these subtypes, the training data set for the computer model includes simulated card images that are generated for the card type. The simulated card images are generated from a semi-randomized background that varies in appearance, onto which an identifying marking of the card type is superimposed, so that the training data for the computer model includes additional randomized sample card images, ensuring the model is robust to further variations in subtypes.
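The generation step (randomized background plus a superimposed identifying marking) could be sketched as follows; the helper name, array shapes, and the choice of uniform-plus-noise background are illustrative assumptions, not the patent's actual implementation:

```python
import numpy as np

def simulate_card_image(marking, size=(64, 96), rng=None):
    """Generate one simulated card image: a semi-randomized background
    with the card type's identifying marking superimposed at a random offset.

    `marking` is a 2-D array of the identifying mark; its nonzero pixels
    overwrite the background.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = size
    # Semi-randomized background: random base intensity plus noise,
    # so appearance varies across generated samples.
    background = rng.uniform(0.2, 0.8) + 0.1 * rng.standard_normal((h, w))
    mh, mw = marking.shape
    top = rng.integers(0, h - mh + 1)
    left = rng.integers(0, w - mw + 1)
    patch = background[top:top + mh, left:left + mw]
    background[top:top + mh, left:left + mw] = np.where(marking > 0, marking, patch)
    return np.clip(background, 0.0, 1.0)
```

Generating many such images for each known marking yields the additional randomized samples the abstract describes.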
TEXT DETECTION METHOD, TEXT RECOGNITION METHOD AND APPARATUS
The present disclosure provides a text detection method, a text recognition method and an apparatus, which relate to the field of artificial intelligence technology, in particular to the field of deep learning and computer vision technologies, and can be applied to scenarios such as optical character recognition. The text detection method is: acquiring an image feature of a text strip in a to-be-recognized image; performing visual enhancement processing on the to-be-recognized image to obtain an enhanced feature map of the to-be-recognized image; comparing the image feature of the text strip with the enhanced feature map for similarity to obtain a target bounding box of the text strip on the enhanced feature map.
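The final step (comparing the text strip's feature with the enhanced feature map for similarity to localize the strip) can be illustrated with a per-location cosine-similarity search. This is a simplified reading of the abstract; the function name and feature shapes are assumptions:

```python
import numpy as np

def localize_text_strip(strip_feature, feature_map):
    """Compare a text strip's feature vector against every spatial position
    of an enhanced feature map and return the best-matching location.

    `strip_feature` has shape (C,); `feature_map` has shape (C, H, W).
    Cosine similarity is computed per position; the argmax gives the
    location around which a target bounding box would be formed.
    """
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, -1)                    # (C, H*W)
    norms = np.linalg.norm(flat, axis=0) * np.linalg.norm(strip_feature)
    sims = strip_feature @ flat / np.maximum(norms, 1e-8)
    idx = int(np.argmax(sims))
    return divmod(idx, w)                               # (row, col) of best match
```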
Optimizing inference time of entity matching models
Methods, systems, and computer-readable storage media for receiving input data including a set of entities of a first type and a set of entities of a second type, providing a set of features based on entities of the first type, the set of features including features expected to be included in entities of the second type, filtering entities of the second type based on the set of features to provide a sub-set of entities of the second type, and generating an output by processing the set of entities of the first type and the sub-set of entities of the second type through a ML model, the output comprising a set of matching pairs, each matching pair in the set of matching pairs comprising an entity of the set of entities of the first type and at least one entity of the sub-set of entities of the second type.
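The optimization described here is a cheap pre-filter before the expensive ML matching step: only second-type entities carrying the expected features reach the model. A minimal sketch, with hypothetical entity dictionaries and a stand-in `model` callable:

```python
def match_entities(first_entities, second_entities, expected_features, model):
    """Filter second-type entities by expected features, then run the
    (more expensive) matching model only on the surviving subset.

    Each entity is a dict with an "id" and a "features" list; `model`
    is any callable deciding whether two entities match.
    """
    # Cheap pre-filter: keep only entities sharing at least one expected feature.
    subset = [e for e in second_entities
              if expected_features & set(e["features"])]
    pairs = []
    for a in first_entities:
        for b in subset:
            if model(a, b):              # expensive ML match, on fewer candidates
                pairs.append((a["id"], b["id"]))
    return pairs
```

The inference-time saving comes from shrinking the inner loop: the model scores `|first| x |subset|` pairs instead of `|first| x |second|`.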
METHODS AND DEVICES FOR GENERATING TRAINING SAMPLE, TRAINING MODEL AND RECOGNIZING CHARACTER
Methods and devices for generating a training sample, training a model and recognizing a character are provided. The method for generating a training sample comprises: acquiring an image of characters, and determining respective characters contained in the image; and using a projection method to determine weights of the respective characters contained in the image, tagging the image with labels according to the weights of the respective characters contained in the image, and forming a training sample. The method for training a model comprises: using the training sample to train a character recognition model. The method for recognizing a character comprises: using the character recognition model to perform character recognition. The above methods and devices realize accurate recognition of characters, such as the double-half characters contained in an image of a wheel-type meter, and can provide a highly accurate recognition result biased toward the dominant character.
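For a half-rolled wheel-meter digit ("double-half character"), a projection method can weight the two visible digits by their share of foreground pixels. The abstract does not detail its projection method; the following is a plausible sketch under that assumption, with an illustrative helper name:

```python
import numpy as np

def character_weights(cell, boundary):
    """Weight the two characters visible in a half-rolled wheel-digit cell.

    `cell` is a binary 2-D array of the digit cell; `boundary` is the row
    splitting the upper (outgoing) and lower (incoming) digits. Each weight
    is that digit's share of foreground pixels, via horizontal projection.
    """
    row_profile = cell.sum(axis=1)          # horizontal projection per row
    upper = row_profile[:boundary].sum()
    lower = row_profile[boundary:].sum()
    total = max(upper + lower, 1)           # avoid division by zero
    return upper / total, lower / total
```

These weights would then label the training sample, biasing the trained recognizer toward the dominant character.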
Quotation method executed by computer, quotation device, electronic device and storage medium
Disclosed is a quotation method executed by a computer, comprising: obtaining structure parameters and electrical parameters of a product (S101); constructing an external view of the product by using the structure parameters of the product, and comparing the external view of the product with the external views of historical products for similarity to obtain an appearance similarity ranking (S102); comparing the electrical parameters of the product with the electrical parameters of the historical products for similarity to obtain an electrical parameter similarity ranking (S103); obtaining, on the basis of the cost weights of a structural member and an electrical component together with the appearance similarity ranking and the electrical parameter similarity ranking, a comprehensive ranking based on the structure parameters and the electrical parameters (S104); and determining, based on the comprehensive ranking, a bill of materials of the product, and calculating the product quotation based on the bill of materials (S105).
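The combination step (S104) merges the two similarity orderings using the cost weights. One simple way to realize it is a weighted rank aggregation; the patent does not specify the formula, so the position-based scoring below is an assumption:

```python
def comprehensive_ranking(appearance_rank, electrical_rank,
                          structural_weight, electrical_weight):
    """Combine appearance and electrical-parameter orderings into one
    comprehensive ordering using the cost weights of structural members
    and electrical components. Lower combined score = better match.

    Each input ranking is a list of historical product IDs, best first.
    """
    scores = {}
    for pos, product in enumerate(appearance_rank):
        scores[product] = scores.get(product, 0.0) + structural_weight * pos
    for pos, product in enumerate(electrical_rank):
        scores[product] = scores.get(product, 0.0) + electrical_weight * pos
    return sorted(scores, key=scores.get)
```

The top-ranked historical product then supplies the bill of materials from which the quotation is calculated.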
Method and system for distributed learning and adaptation in autonomous driving vehicles
The present teaching relates to a system, method, and medium for in-situ perception in an autonomous driving vehicle. A plurality of types of sensor data acquired continuously by a plurality of types of sensors deployed on the vehicle are first received, where the plurality of types of sensor data provide information about the surroundings of the vehicle. Based on at least one model, one or more items are tracked from a first of the plurality of types of sensor data acquired by one or more of a first type of the plurality of types of sensors, wherein the one or more items appear in the surroundings of the vehicle. At least some of the one or more items are then automatically labeled on-the-fly via either cross-modality validation or cross-temporal validation of the one or more items and are used to locally adapt, on-the-fly, the at least one model in the vehicle.
METHOD FOR TRAINING IMAGE-TEXT MATCHING MODEL, COMPUTING DEVICE, AND STORAGE MEDIUM
A computer-implemented method is provided. The method includes: obtaining a sample text and a sample image corresponding to the sample text; labeling a true semantic tag for the sample text according to a first preset rule; obtaining a text feature representation of the sample text and a predicted semantic tag output by a text coding sub-model; obtaining an image feature representation of the sample image output by an image coding sub-model; calculating a first loss based on the true semantic tag and the predicted semantic tag; calculating a contrast loss based on the text feature representation of the sample text and the image feature representation of the sample image; adjusting parameters of the text coding sub-model based on the first loss and the contrast loss; and adjusting parameters of the image coding sub-model based on the contrast loss.
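The contrast loss in this method is the standard symmetric contrastive objective over a batch of paired text/image features: matching pairs (the diagonal of the similarity matrix) are pulled together, mismatched pairs pushed apart. A NumPy sketch, with the temperature value as an assumption:

```python
import numpy as np

def contrastive_loss(text_feats, image_feats, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired features.

    Row i of `text_feats` is assumed to match row i of `image_feats`;
    the loss is cross-entropy over pairwise cosine similarities, averaged
    over the text-to-image and image-to-text directions.
    """
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    v = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    logits = t @ v.T / temperature        # pairwise similarity matrix
    labels = np.arange(len(logits))       # i-th text matches i-th image

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)          # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()              # diagonal = true pairs

    return (xent(logits) + xent(logits.T)) / 2
```

Per the abstract, this loss updates both sub-models, while the first (semantic-tag) loss additionally updates only the text coding sub-model.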
Systems, methods, and techniques for training neural networks and utilizing the neural networks to detect non-compliant content
A system can include one or more processors and one or more non-transitory computer-readable storage media storing computing instructions configured to run on the one or more processors and perform: generating a training dataset, comprising synthetic training images, for training a neural network detection model; identifying, using the neural network detection model, as trained, the non-compliant content in the synthetic training images; receiving, at the neural network detection model, at least one image; and utilizing the neural network detection model to determine whether the at least one image comprises the non-compliant content. Other embodiments are disclosed herein.