Patent classifications
G06V30/15
Method and apparatus for training a character detector based on weak supervision, system and medium
A method and apparatus for training a character detector based on weak supervision, a character detection system and a computer readable storage medium are provided, wherein the method includes: inputting coarse-grained annotation information of a to-be-processed object, wherein the coarse-grained annotation information includes a whole bounding outline of a word, text bar or line of the to-be-processed object; dividing the whole bounding outline of the coarse-grained annotation information to obtain a coarse bounding box of a character of the to-be-processed object; obtaining a predicted bounding box of the character of the to-be-processed object through a neural network model from the coarse-grained annotation information; and determining a fine bounding box of the character of the to-be-processed object as character-based annotation of the to-be-processed object, according to the coarse bounding box and the predicted bounding box.
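The labeling step this abstract describes can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the even-width split of the word outline and the IoU acceptance threshold are assumptions about one plausible way to combine the coarse and predicted boxes.

```python
# Sketch of weakly supervised character labeling: split a word-level box into
# coarse per-character boxes, then keep a model's predicted box only when it
# agrees sufficiently with its coarse box. Threshold and split are assumptions.

def divide_outline(word_box, num_chars):
    """Split a word-level box (x0, y0, x1, y1) into equal-width coarse boxes."""
    x0, y0, x1, y1 = word_box
    w = (x1 - x0) / num_chars
    return [(x0 + i * w, y0, x0 + (i + 1) * w, y1) for i in range(num_chars)]

def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fine_boxes(word_box, predicted, iou_thresh=0.5):
    """Trust a predicted box only when it overlaps its coarse box enough;
    otherwise fall back to the coarse box as the character annotation."""
    coarse = divide_outline(word_box, len(predicted))
    return [p if iou(c, p) >= iou_thresh else c for c, p in zip(coarse, predicted)]
```

The fine boxes produced this way can then serve as character-level annotations for the next training round.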
Automated extraction of unstructured tables and semantic information from arbitrary documents
A Table Extractor provides various techniques for automatically delimiting and extracting tables from arbitrary documents. In various implementations, the Table Extractor also generates functional relationships on those tables that are suitable for generating query responses via any of a variety of natural language processing techniques. In other words, the Table Extractor provides techniques for detecting and representing table information in a way suitable for information extraction. These techniques output relational functions on the table in the form of tuples constructed from automatically identified headers and labels and the relationships between those headers and labels and the contents of one or more cells of the table. These tuples are suitable for correlating natural language questions about a specific piece of information in the table with the rows, columns, and/or cells that contain that information.
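The tuple representation described above can be illustrated with a minimal sketch, assuming a simple grid whose first row holds column headers and whose first column holds row labels (the function names and grid layout are illustrative, not from the patent):

```python
def table_to_tuples(grid):
    """Convert a grid (first row = headers, first column = labels)
    into (label, header, value) relational tuples."""
    headers = grid[0][1:]
    tuples = []
    for row in grid[1:]:
        label, cells = row[0], row[1:]
        for header, value in zip(headers, cells):
            tuples.append((label, header, value))
    return tuples

def lookup(tuples, label, header):
    """Answer a question like 'What is <header> for <label>?' via the tuples."""
    for l, h, v in tuples:
        if l == label and h == header:
            return v
    return None
```

A natural language question such as "What is the population of Bergen?" would then map to `lookup(tuples, "Bergen", "Population")` once the question's entities are matched to labels and headers.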
Multi Receipt Detection
An information processing method and apparatus are provided for: obtaining a captured image; detecting a character region from the captured image; performing association processing between expense type information specified from each of one or more receipts, which are identified by using a detection result of the character region from the captured image, and expense amount information specified from each of the one or more receipts in the captured image; and outputting an expense report obtained based on the association processing between the merchant information of each of the one or more receipts and the one or more pieces of expense amount information of each of the one or more receipts.
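The association step can be sketched as grouping the fields recognized from each detected receipt and pairing them into report rows. This is a hypothetical illustration; the field names (`type`, `amount`) and the flat input format are assumptions, not the patent's data model:

```python
def build_expense_report(fields):
    """fields: list of (receipt_id, field_name, value) tuples extracted from
    the character regions of a multi-receipt image. Groups fields by receipt
    and pairs expense type with expense amount per receipt."""
    report = {}
    for receipt_id, name, value in fields:
        report.setdefault(receipt_id, {})[name] = value
    return [
        {"receipt": rid, "type": f.get("type"), "amount": f.get("amount")}
        for rid, f in sorted(report.items())
    ]
```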
Character detection method and apparatus
Disclosed embodiments relate to a character detection method and apparatus. In some embodiments, the method includes: using an image including an annotated word as an input to a machine learning model; selecting characters for training the machine learning model from the predicted characters inside the annotation region of the annotated word, based on the prediction result for those characters and the annotation information of the annotated word; and training the machine learning model based on features of the selected characters. This implementation fully trains a machine learning model by using existing word-level annotated images, to obtain a machine learning model capable of detecting characters in images, thereby reducing the cost of training such a model.
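The selection step can be sketched as follows, assuming each prediction carries a box and a confidence score; keeping only confident predictions whose center falls inside the annotated word region is one plausible selection rule, not the patent's exact criterion:

```python
def select_training_chars(word_box, predicted_chars, min_conf=0.7):
    """predicted_chars: list of ((x0, y0, x1, y1), confidence). Keep confident
    predictions whose center lies inside the annotated word region, so they
    can be used as character-level training samples."""
    x0, y0, x1, y1 = word_box
    selected = []
    for (bx0, by0, bx1, by1), conf in predicted_chars:
        cx, cy = (bx0 + bx1) / 2, (by0 + by1) / 2
        if conf >= min_conf and x0 <= cx <= x1 and y0 <= cy <= y1:
            selected.append((bx0, by0, bx1, by1))
    return selected
```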
Method and apparatus for detecting text
A method and apparatus for detecting text are provided. The method includes: extracting features of a to-be-detected image; predicting, using a character detection network, a probability of each pixel point in the to-be-detected image being a character pixel point, and, for each character pixel point, position information of the pixel point relative to the bounding box of the character containing it; determining position information of bounding boxes of candidate characters based on the prediction result of the character detection network; inputting the extracted features into a character map network, converting the feature map output by the character map network, and generating character vectors; determining a neighboring candidate character of each candidate character, and connecting each candidate character with its associated neighboring candidate character to form a character set; and determining a character area of the to-be-detected image.
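The decoding from per-pixel predictions to candidate boxes can be sketched as follows. The offset encoding (distances from a pixel to the four box edges) and the probability threshold are illustrative assumptions; the patent's networks would produce these maps, and a grouping step would follow:

```python
def decode_character_boxes(prob_map, offset_map, thresh=0.5):
    """prob_map[y][x]: probability that pixel (x, y) is a character pixel.
    offset_map[y][x]: (left, top, right, bottom) distances from the pixel to
    the edges of its character's bounding box. Each pixel above the threshold
    votes one candidate box; duplicates would normally be grouped afterwards."""
    boxes = []
    for y, row in enumerate(prob_map):
        for x, p in enumerate(row):
            if p >= thresh:
                l, t, r, b = offset_map[y][x]
                boxes.append((x - l, y - t, x + r, y + b))
    return boxes
```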
Extracting data from electronic documents
A structured data processing system includes hardware processors and a memory in communication with the hardware processors. The memory stores a data structure and an execution environment. The data structure includes an electronic document. The execution environment includes a data extraction solver configured to perform operations including identifying a particular page of the electronic document; performing optical character recognition (OCR) on the page to determine a plurality of alphanumeric text strings on the page; determining a type of the page; determining a layout of the page; determining at least one table on the page based at least in part on the determined type of the page and the determined layout of the page; and extracting a plurality of data from the determined table on the page. The execution environment also includes a user interface module that generates a user interface that renders graphical representations of the extracted data; and a transmission module that transmits data that represents the graphical representations.
System and methods for assigning word fragments to text lines in optical character recognition-extracted data
Systems and methods for assigning word fragments to lines of text in optical character recognition (OCR) extracted data can include at least one processor obtaining a plurality of word fragments from OCR-generated data associated with an image. The at least one processor can determine vertical coordinates of each of the word fragments in the image. The at least one processor can cluster the plurality of word fragments into one or more clusters of word fragments based on the vertical coordinates of the plurality of word fragments. The at least one processor can assign each word fragment of a respective cluster to a corresponding text line based on the clustering.
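The vertical clustering can be sketched with a simple greedy pass over fragments sorted by vertical position; the pixel tolerance and the (text, y-center) input format are illustrative assumptions:

```python
def cluster_into_lines(fragments, tol=5):
    """fragments: list of (text, y_center) pairs from OCR output. Greedily
    cluster fragments whose vertical coordinates lie within `tol` pixels of
    the current cluster, assigning each cluster to one text line."""
    lines = []
    for text, y in sorted(fragments, key=lambda f: f[1]):
        if lines and abs(y - lines[-1]["y"]) <= tol:
            lines[-1]["words"].append(text)  # same line: close in y
        else:
            lines.append({"y": y, "words": [text]})  # start a new line
    return [line["words"] for line in lines]
```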
Method of digitizing and extracting meaning from graphic objects
A method for digitizing and extracting meaning from graphic objects such as bar and pie charts uses a convolutional neural network to decompose a chart into its sub-parts (pie and slices, or bars, axes and legends), with significant tolerance to the wide range of variations in the shape and relative position of pies, bars, axes and legends. A linear regression calibration allows values to be read correctly even when there are many OCR failures.
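The calibration idea can be sketched as fitting a line from axis-tick pixel positions to their OCR'd numeric values, skipping ticks whose OCR output is unreadable; the least-squares fit and the (pixel, text) input format are illustrative assumptions, not the patent's exact procedure:

```python
def fit_axis_calibration(ticks):
    """ticks: list of (pixel_coord, ocr_text) for axis tick labels.
    Fit value = a * pixel + b by ordinary least squares, skipping ticks
    whose OCR output is not numeric (tolerating OCR failures)."""
    pts = []
    for pixel, text in ticks:
        try:
            pts.append((pixel, float(text)))
        except ValueError:
            continue  # e.g. OCR read "1OO" instead of "100": skip this tick
    n = len(pts)
    mx = sum(p for p, _ in pts) / n
    mv = sum(v for _, v in pts) / n
    a = (sum((p - mx) * (v - mv) for p, v in pts)
         / sum((p - mx) ** 2 for p, _ in pts))
    b = mv - a * mx
    return a, b
```

With the fitted (a, b), any bar top or slice boundary at pixel position p maps to the data value a * p + b.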
Systems and methods for merging word fragments in optical character recognition-extracted data
Systems and methods for merging adjacent word fragments in outputs of optical character recognition (OCR) systems can include a processor obtaining word fragments associated with OCR data generated from an image. Each word fragment can be associated with a respective text line of a plurality of text lines. The processor can determine, for each pair of adjacent word fragments in a text line, a respective normalized horizontal distance between the pair of adjacent word fragments. The processor can identify one or more pairs of adjacent word fragments that are candidates for merging based on the determined normalized horizontal distances. The processor can determine that a pair of adjacent word fragments, among the one or more pairs of adjacent word fragments that are candidates for merging, matches a predefined expression of a plurality of predefined expressions, and merge that pair of adjacent word fragments into a single word.
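The two tests the abstract describes, a small normalized gap and a match against a predefined expression, can be sketched as follows. The gap normalization by fragment height, the ratio threshold, and the currency regex are all illustrative assumptions:

```python
import re

def merge_fragments(line, max_gap_ratio=0.3,
                    patterns=(r"^\$\d[\d,]*\.\d{2}$",)):
    """line: list of (text, x0, x1, height) fragments in reading order.
    Merge adjacent fragments when their horizontal gap, normalized by
    fragment height, is small AND their concatenation matches one of the
    predefined expressions (here, an assumed currency amount pattern)."""
    merged = [line[0]]
    for text, x0, x1, h in line[1:]:
        pt, px0, px1, ph = merged[-1]
        gap = (x0 - px1) / h  # normalized horizontal distance
        joined = pt + text
        if gap <= max_gap_ratio and any(re.match(p, joined) for p in patterns):
            merged[-1] = (joined, px0, x1, max(ph, h))  # merge into one word
        else:
            merged.append((text, x0, x1, h))
    return [t for t, *_ in merged]
```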