G06V30/41

METHOD AND APPARATUS FOR IDENTIFYING CHARACTERISTICS OF TRADING CARDS
20230222641 · 2023-07-13 ·

Methods and apparatuses for detecting characteristics of a card include a method for identifying a card containing foil, the foil detection method comprising converting an image file depicting a card into a hue, saturation, value (“HSV”) colour space, applying a value mask to exclude from analysis a group of pixels that do not form an area of intensified brightness, and comparing the number of remaining pixels against a threshold to determine whether the card contains foil. In another aspect, a method for assigning a condition grading to a card comprises converting an image file depicting a diffusely illuminated card into an HSV colour space, isolating a uniform portion of the converted image file, applying a value mask to the converted image file to exclude pixels forming an undamaged portion of the card, and comparing the number of remaining pixels to a plurality of grading thresholds to assign a condition grading.
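The foil-detection steps above (HSV conversion, value mask, pixel-count threshold) can be sketched in a few lines. A minimal illustration using Python's standard `colorsys` module; the value cutoff and pixel-count threshold are chosen arbitrarily, since the abstract does not specify them:

```python
import colorsys

def contains_foil(pixels, value_cutoff=0.9, foil_threshold=500):
    """Hypothetical sketch of the foil check described in the abstract.

    `pixels` is an iterable of (r, g, b) tuples with components in 0..255.
    The value mask excludes pixels whose HSV value falls below
    `value_cutoff`; if enough intensified-brightness pixels remain,
    the card is flagged as containing foil.
    """
    bright = 0
    for r, g, b in pixels:
        _, _, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        if v >= value_cutoff:  # keep only intensified-brightness pixels
            bright += 1
    return bright >= foil_threshold
```

In practice the same mask would be applied vectorized (e.g., with an image library) rather than pixel by pixel, but the thresholding logic is the same.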

METHODS AND SYSTEMS FOR RECEIPT CAPTURING PROCESS

A receipt capture tool residing on a customer mobile device may be initiated when a customer completes an in-store or online purchase. The receipt capture tool may prompt the customer to capture an image of a receipt detailing a purchase and an item (e.g., product or service) purchased. For instance, a photo of a physical receipt may be taken by the mobile device, or an electronic receipt or email detailing the purchase, transmitted from a physical merchant or online merchant server, may be stored. Receipt information may be extracted and saved with other information pertinent to the item purchased, including warranty information. If the customer needs to return or repair the item purchased at a future date, the receipt and warranty information may be subsequently accessed via the customer's mobile device. The receipt and warranty information may also be stored in a searchable database to facilitate easy retrieval by the customer.

Character recognition method and apparatus, electronic device, and storage medium

A method, apparatus, electronic device, and storage medium for character recognition are provided. The method may perform image processing on an acquired original image to obtain a region to be recognized. The region may include a character. The method may determine an area ratio of the region to be recognized on the original image. The method may determine an angle between the region to be recognized and a preset direction. The method may determine a character density of the region to be recognized. The method may perform character recognition on the character in the region to be recognized in response to determining that the area ratio is greater than a ratio threshold, the angle is less than an angle threshold, and the character density is less than a density threshold.
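The three gating checks described above (area ratio, angle, character density) amount to a simple predicate that decides whether recognition should run. A hedged sketch; all threshold values and parameter names are assumed rather than taken from the patent:

```python
def should_recognize(region_area, image_area, angle_deg, char_count,
                     ratio_threshold=0.01, angle_threshold=15.0,
                     density_threshold=0.5):
    """Gate character recognition on three conditions (thresholds assumed):
    the region is large enough relative to the image, nearly aligned with
    the preset direction, and not too densely packed with characters."""
    area_ratio = region_area / image_area
    density = char_count / region_area
    return (area_ratio > ratio_threshold
            and abs(angle_deg) < angle_threshold
            and density < density_threshold)
```

Only regions passing all three checks would be forwarded to the (unspecified) recognition step.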

Image analysis based document processing for inference of key-value pairs in non-fixed digital documents

An online system extracts information from non-fixed form documents. The online system receives an image of a form document and obtains a set of phrases and locations of the set of phrases on the form image. For at least one field, the online system determines key scores for the set of phrases. The online system identifies a set of candidate values for the field from the set of identified phrases and identifies a set of neighbors for each candidate value from the set of identified phrases. The online system determines neighbor scores, where a neighbor score for a candidate value and a respective neighbor is determined based on the key score for the neighbor and a spatial relationship of the neighbor to the candidate value. The online system selects a candidate value and a respective neighbor based on the neighbor score as the value and key for the field.
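One way to picture the neighbor-scoring step: combine each neighbor's key score with a spatial term that decays with distance from the candidate value, then select the best-scoring (key, value) pair. A minimal sketch; the box format, the decay function, and every name below are assumptions, not the system's actual scoring:

```python
import math

def neighbor_score(key_score, key_box, value_box):
    """Combine a neighbor's key score with a spatial term (illustrative).
    Boxes are (x, y) centers; closer neighbors score higher."""
    distance = math.hypot(value_box[0] - key_box[0],
                          value_box[1] - key_box[1])
    spatial = 1.0 / (1.0 + distance)  # decay with distance
    return key_score * spatial

def select_key_value(candidates):
    """candidates: list of (value_text, value_box, neighbors), where each
    neighbor is (key_text, key_score, key_box). Returns the (key, value)
    pair with the highest neighbor score, or None."""
    best = None
    for value_text, value_box, neighbors in candidates:
        for key_text, key_score, key_box in neighbors:
            s = neighbor_score(key_score, key_box, value_box)
            if best is None or s > best[0]:
                best = (s, key_text, value_text)
    return (best[1], best[2]) if best else None
```

A real system would likely use a learned spatial model rather than a fixed decay, but the selection structure is the same.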

SYSTEMS AND METHODS FOR GENERATING TITLE REPORTS
20220414806 · 2022-12-29 ·

The present disclosure provides systems and methods for generating title reports in standardized formats and providing an environment for exchanging the title reports. The system can receive property documents from a plurality of sources. The property documents can be appended with metadata so as to enable the system to order the property documents within a title report. In some examples, generated title reports can be made accessible to requesting entities within a cloud environment.

METHOD AND SYSTEM FOR EXTRACTING INFORMATION FROM A DOCUMENT

A method and computing apparatus for extracting information from a document are provided. The method includes receiving a document, extracting data from the document, assigning the document to a category from among a predetermined plurality of categories based on a result of the extracted data, and generating a structured output by formatting the extracted data based on the assigned category.
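The extract-categorize-format pipeline above can be sketched with a field-coverage heuristic: assign the category whose expected fields best match what was extracted, then emit a structured record in that category's shape. The category names and field schemas below are illustrative, not the patent's actual categories:

```python
# Hypothetical per-category field schemas; names are illustrative only.
CATEGORY_FIELDS = {
    "invoice": ["vendor", "date", "total"],
    "receipt": ["merchant", "date", "amount"],
}

def assign_category(extracted):
    """Pick the category whose expected fields best cover the extracted keys."""
    def coverage(cat):
        fields = CATEGORY_FIELDS[cat]
        return sum(1 for f in fields if f in extracted) / len(fields)
    return max(CATEGORY_FIELDS, key=coverage)

def structured_output(extracted):
    """Format the extracted data according to the assigned category's schema."""
    category = assign_category(extracted)
    return {"category": category,
            "fields": {f: extracted.get(f) for f in CATEGORY_FIELDS[category]}}
```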

AUTOMATIC RULE PREDICTION AND GENERATION FOR DOCUMENT CLASSIFICATION AND VALIDATION

A method is provided. The method may include, in response to electronically receiving a document, automatically classifying the document and different parts of the document, by electronically identifying a document type associated with the document and electronically tagging data associated with the different parts of the document based on classification rules. The method may further include automatically extracting the tagged data associated with the automatically classified document based on data extraction rules. The method may further include detecting first feedback associated with the classification rules and second feedback associated with the data extraction rules. The method may further include automatically generating and updating validation rules based on the identified document type, the detected first feedback, and the detected second feedback to validate the automatically classified document and the automatically tagged and extracted data.
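A minimal sketch of how keyword-based classification rules and feedback-driven validation-rule generation might fit together; the rule shapes and the feedback format (field, corrected-value pairs) are assumptions, not the patent's actual data model:

```python
def classify_document(text, classification_rules):
    """Classify by keyword rules: a document type matches when all of its
    keywords appear in the text (rules and types illustrative)."""
    lowered = text.lower()
    for doc_type, keywords in classification_rules.items():
        if all(k in lowered for k in keywords):
            return doc_type
    return "unknown"

def generate_validation_rules(doc_type, feedback_classification, feedback_extraction):
    """Fold feedback into per-type validation rules (sketch). Each feedback
    entry is a (field, corrected_value) pair; the generated rule simply
    requires every corrected field to be present in future extractions."""
    rules = {"doc_type": doc_type, "required_fields": set()}
    for field, _ in feedback_classification + feedback_extraction:
        rules["required_fields"].add(field)
    return rules
```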

AUTOMATED TELLER MACHINE FOR DETECTING SECURITY VULNERABILITIES BASED ON DOCUMENT NOISE REMOVAL
20220398900 · 2022-12-15 ·

An Automated Teller Machine (ATM) for detecting security vulnerabilities by removing noise artifacts from documents receives a transaction request when a document is inserted into the ATM, where the document contains a noise artifact at least partially obstructing a portion of the document. The ATM generates an image of the document, where the image displays at least one data item comprising a sender's name, a receiver's name, and a number representing an amount. The ATM determines whether the noise artifact at least partially obstructs a data item. In response to determining that the noise artifact at least partially obstructs a data item, the ATM generates a test clean image of the document by removing the noise artifact from the image. In response to determining that the noise artifact is removed, the ATM approves the transaction request.
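The obstruction check reduces to a bounding-box intersection test between the artifact and each data item. A sketch assuming axis-aligned (x0, y0, x1, y1) boxes; the box format is an assumption, not part of the abstract:

```python
def obstructs(artifact_box, item_box):
    """True when the noise artifact's bounding box overlaps a data item's
    bounding box (axis-aligned intersection test; format assumed)."""
    ax0, ay0, ax1, ay1 = artifact_box
    bx0, by0, bx1, by1 = item_box
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1
```

A positive result would trigger the noise-removal step before the data items are read and the transaction approved.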

ADAPTING IMAGE NOISE REMOVAL MODEL BASED ON DEVICE CAPABILITIES
20220398694 · 2022-12-15 ·

A system for adapting an image noise removal model based on a device processing capability receives, from a computing device, a request to adapt an image noise removal module for the computing device. The system compares a processing capability of the computing device with a threshold processing capability. The system determines whether the processing capability is greater or smaller than the threshold processing capability. In response to determining that the processing capability is greater than the threshold processing capability, the system sends a version of the image noise removal module that is adapted for computing devices with processing capabilities less than the threshold processing capability, where the version of the image noise removal module is adapted to have a number of neural network layers less than a threshold number of neural network layers.
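The capability comparison gates which module version is sent. A sketch of such gating: the greater-than branch mirrors the abstract as written (that branch returns the reduced-layer version); the else branch and the layer counts are assumptions, since the abstract describes only one branch:

```python
def select_module_version(device_capability, threshold_capability,
                          reduced_layers=12, full_layers=48):
    """Return the number of neural-network layers in the module version
    to send (layer counts invented for illustration). The greater-than
    branch follows the abstract's stated response; the else branch is
    an assumed default."""
    if device_capability > threshold_capability:
        return reduced_layers  # version with fewer than the threshold number of layers
    return full_layers  # assumed: remaining devices get the full version
```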