G06V30/413

System and method thereof for determining vendor's identity based on network analysis methodology
11580304 · 2023-02-14

A system and method for classifying digital images is presented. The method includes extracting a plurality of descriptive data items of a transaction evidence from a digital image indicating a plurality of purchased items; searching in a data source for informative data based on the extracted plurality of descriptive data items, wherein the informative data includes a price; determining a correlated amount for each of at least one of the plurality of descriptive data items, wherein the correlated amount determined for one of the descriptive data items defines a paid price for the descriptive data item; determining, based on at least one expense type classification rule, a primary expense type of the transaction evidence, wherein the at least one expense type classification rule is applied to the plurality of descriptive data items and each of the correlated amounts; and classifying the digital image based on the primary expense type.
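A minimal sketch of the claimed pipeline, assuming a hypothetical price catalog as the data source and a simple "largest correlated total wins" classification rule (the patent does not disclose a concrete rule set):

```python
from collections import Counter

# Hypothetical lookup table standing in for the external data source.
# Maps a descriptive data item to (catalog price, expense type).
PRICE_CATALOG = {
    "coffee": (3.50, "meals"),
    "notebook": (2.00, "office supplies"),
    "taxi": (15.00, "travel"),
}

def classify_receipt(descriptive_items, paid_amounts):
    """Correlate each extracted line item with a price and expense type,
    then pick the primary expense type of the transaction evidence."""
    totals = Counter()
    for item, paid in zip(descriptive_items, paid_amounts):
        price, expense_type = PRICE_CATALOG.get(item, (None, "unknown"))
        # Correlated amount: prefer the paid price read from the image,
        # fall back to the catalog price when OCR missed the amount.
        amount = paid if paid is not None else (price or 0.0)
        totals[expense_type] += amount
    # Assumed classification rule: the expense type with the largest
    # correlated total becomes the primary type of the whole image.
    return totals.most_common(1)[0][0]
```

For example, a receipt with a $3.50 coffee and a taxi fare whose amount failed to extract would still classify as travel, because the catalog price for the taxi dominates the totals.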

METHOD OF DETECTING, SEGMENTING AND EXTRACTING SALIENT REGIONS IN DOCUMENTS USING ATTENTION TRACKING SENSORS

A method and system for detecting, segmenting, and extracting salient regions in documents by using attention tracking sensors is provided. The method includes: receiving an image that corresponds to a document; receiving, from a sensor, a sequence of measurements that correspond to a human reading of the document; determining, based on the sequence of measurements, at least one region of the document as being a salient document region; demarcating the salient document region in an electronically displayable manner; and outputting a file that includes a displayable version of the document with the demarcated document region. The salient document region may include a title, a section header, and/or a table. The sensor may be an eye-tracking sensor that detects a sequence of eye-gaze positions on the document as a function of time.
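A toy illustration of the saliency step, assuming the eye-tracking sensor yields (x, y) gaze positions and that dwell count over a coarse page grid is the saliency criterion (grid size, page size, and threshold are illustrative, not from the patent):

```python
def salient_regions(gaze_samples, grid=4, page=(800, 1000), min_dwell=3):
    """Bucket eye-gaze samples (x, y) into a grid of cells over the page
    and flag cells with enough dwell samples as salient document regions."""
    cell_w, cell_h = page[0] / grid, page[1] / grid
    counts = {}
    for x, y in gaze_samples:
        cell = (min(int(x // cell_w), grid - 1),
                min(int(y // cell_h), grid - 1))
        counts[cell] = counts.get(cell, 0) + 1
    # A region the reader fixated on repeatedly is demarcated as salient.
    return sorted(cell for cell, n in counts.items() if n >= min_dwell)
```

A real system would map each salient cell back to document structure (a title, section header, or table) before demarcating it in the displayable output file.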

METHOD AND SYSTEM FOR CLASSIFYING DOCUMENT IMAGES

A method and system are used for managing and classifying electronic document images. Each of the electronic document images is divided into an array of image segments. The method extracts image features from each of the image segments to obtain numerical coefficients for each of the image segments. The numerical coefficients are compared with each other to generate sub-codes. A classification code is determined as a combination of the sub-codes. The classification codes of a plurality of electronic document images can be stored in a database for further analysis. Based on the classification codes, similarity rates between at least two document images can be determined.
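A small sketch of the coding scheme, assuming mean segment intensity as the numeric coefficient and a greater-than comparison of consecutive coefficients as the sub-code (the patent leaves both choices open):

```python
def classification_code(image, grid=2):
    """Split the image into grid x grid segments, use each segment's mean
    intensity as its numerical coefficient, and compare consecutive
    coefficients to produce binary sub-codes."""
    h, w = len(image), len(image[0])
    sh, sw = h // grid, w // grid
    coeffs = []
    for gy in range(grid):
        for gx in range(grid):
            vals = [image[y][x]
                    for y in range(gy * sh, (gy + 1) * sh)
                    for x in range(gx * sw, (gx + 1) * sw)]
            coeffs.append(sum(vals) / len(vals))
    # The classification code is the concatenation of the sub-codes.
    return "".join("1" if a > b else "0" for a, b in zip(coeffs, coeffs[1:]))

def similarity(code_a, code_b):
    """Similarity rate between two stored codes: fraction of matching bits."""
    return sum(a == b for a, b in zip(code_a, code_b)) / len(code_a)
```

Because the codes are short fixed-alphabet strings, they can be indexed in a database and compared cheaply, which is what makes the similarity-rate analysis practical at scale.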

Integration of an email client with hosted applications

Disclosed are various embodiments for integrating an email client with hosted applications. An email is received from an email client. An image that is a component of the email is identified and sent to an optical character recognition (OCR) service. Extracted text is received from the OCR service. A request for an action object is then sent to a connector for an application, the action object representing a potential action that could be performed with the application based on the extracted text from the OCR service. The action object is then sent to the email client, which is configured to display a prompt allowing a user to perform the action represented by the action object.
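The flow can be sketched as three cooperating pieces; the OCR service and connector below are stand-ins with assumed interfaces, and the action-object fields are illustrative:

```python
def ocr_service(image_bytes):
    """Stand-in for the external OCR service (assumed interface)."""
    return "Invoice #123 due 2023-03-01"

def connector_request(extracted_text):
    """Connector for a hosted application: build an action object
    representing a potential action based on the extracted text."""
    if "Invoice" in extracted_text:
        return {"action": "create_payment",
                "label": "Pay this invoice",
                "source_text": extracted_text}
    return None

def handle_email(image_attachments):
    """Pipeline from the abstract: image -> OCR -> connector -> action
    objects that the email client can render as prompts to the user."""
    actions = []
    for image in image_attachments:
        text = ocr_service(image)
        action = connector_request(text)
        if action is not None:
            actions.append(action)
    return actions
```

The email client only ever sees finished action objects, so new hosted applications can be supported by adding connectors without changing the client.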

Triage engine for document authentication

Computer systems and methods are provided for receiving a first authentication request that includes an image of an identification document. A risk value is determined using one or more information factors that correspond to the authentication request. A validation user interface that includes the image of the identification document is displayed. A risk category that corresponds to the risk value is determined using at least a first risk threshold. In accordance with a determination that the risk value corresponds to a first risk category, a visual indication that corresponds to the first risk category is displayed. In accordance with a determination that the risk value corresponds to a second risk category, a visual indication that corresponds to the second risk category is displayed.
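A minimal sketch of the triage logic, assuming a weighted sum over named information factors and a single threshold splitting two categories (the weights, factor names, and indications are illustrative; the patent does not disclose a formula):

```python
def risk_value(factors):
    """Combine information factors for an authentication request into a
    single risk value via an illustrative weighted sum."""
    weights = {"document_tampering": 0.6, "geo_mismatch": 0.3, "velocity": 0.1}
    return sum(weights.get(name, 0.0) * value for name, value in factors.items())

def risk_category(value, first_threshold=0.5):
    """Map a risk value onto a category using the first risk threshold."""
    return "high" if value >= first_threshold else "low"

# Each category maps to a distinct visual indication in the validation UI.
INDICATORS = {"high": "red banner", "low": "green check"}
```

In the claimed flow, the validation user interface shows the identification-document image alongside the indicator for whichever category the computed risk value falls into.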

Processing structured documents using convolutional neural networks
11550871 · 2023-01-10

Structured documents are processed using convolutional neural networks. For example, the processing can include receiving a rendered form of a structured document; mapping a grid of cells to the rendered form; assigning a respective numeric embedding to each cell in the grid, comprising, for each cell: identifying content in the structured document that corresponds to a portion of the rendered form that is mapped to the cell, mapping the identified content to a numeric embedding for the identified content, and assigning the numeric embedding for the identified content to the cell; generating a matrix representation of the structured document from the numeric embeddings assigned to the cells of the grid; and generating neural network features of the structured document by processing the matrix representation of the structured document through a subnetwork comprising one or more convolutional neural network layers.
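The grid-to-matrix step can be sketched as follows, using a deterministic hash as a toy embedding (a real system would use learned embeddings, and the cell-content mapping would come from the rendered form, not a dict):

```python
import hashlib

def embed(token, dim=4):
    """Toy deterministic embedding: hash the cell's content into a small
    fixed-length numeric vector."""
    digest = hashlib.md5(token.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def matrix_representation(cell_contents, rows, cols, dim=4):
    """Build the grid matrix fed to the convolutional subnetwork: each
    cell holds the embedding of the content mapped to it, or a zero
    vector when no content falls inside the cell."""
    zero = [0.0] * dim
    return [[embed(cell_contents[(r, c)]) if (r, c) in cell_contents else zero
             for c in range(cols)]
            for r in range(rows)]
```

The resulting rows x cols x dim tensor preserves the document's 2-D layout, which is what lets the convolutional layers pick up spatial structure such as tables and form fields.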
