G06V30/19133

SYSTEM AND METHOD FOR GENERATING BEST POTENTIAL RECTIFIED DATA BASED ON PAST RECORDINGS OF DATA
20230237822 · 2023-07-27

Various methods, apparatuses/systems, and media for data processing are disclosed. A processor receives a digital document; applies an optical character recognition (OCR) algorithm to the received digital document by utilizing an OCR tool; identifies defective data extracted by the OCR tool resulting from relatively inferior image quality of the received digital document; implements an auto-rectification algorithm on the identified defective data; automatically generates, in response to implementing the auto-rectification algorithm, corresponding auto-rectified data for each item of identified defective data; records the defective data and corresponding auto-rectified data at a field level; receives user input data on the recorded auto-rectified data; determines whether the auto-rectified data is correct; and populates, upon determining that the auto-rectified data is correct, a machine learning model with the received user input data to be utilized for subsequently received digital documents.
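The abstract above describes a loop of defect detection, auto-rectification, field-level recording, and feedback into a learning store. A minimal sketch of that loop follows; the class, the defect heuristic, and the rectification rules are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of the field-level auto-rectification loop: detect a
# defective OCR value, rectify it, record both values at the field level,
# and keep only user-confirmed corrections for later learning.
from dataclasses import dataclass

@dataclass
class FieldRecord:
    field_name: str
    defective_value: str
    rectified_value: str
    user_confirmed: bool = False

class AutoRectifier:
    def __init__(self):
        self.records: list[FieldRecord] = []
        self.training_examples: list[tuple[str, str]] = []

    def is_defective(self, value: str) -> bool:
        # Stand-in defect check: flag common OCR confusion artifacts.
        return any(ch in value for ch in "|~^")

    def rectify(self, value: str) -> str:
        # Stand-in rule: map frequent misreads back to likely characters.
        return value.replace("|", "l").replace("~", "-")

    def process_field(self, name: str, value: str) -> str:
        if not self.is_defective(value):
            return value
        fixed = self.rectify(value)
        # Record defective and rectified data at the field level.
        self.records.append(FieldRecord(name, value, fixed))
        return fixed

    def apply_user_feedback(self, record: FieldRecord, correct: bool):
        record.user_confirmed = correct
        if correct:
            # Populate the learning store only with confirmed corrections,
            # to be reused on subsequently received documents.
            self.training_examples.append(
                (record.defective_value, record.rectified_value))

r = AutoRectifier()
out = r.process_field("invoice_no", "INV-00|23")
```

A confirmed correction would then be fed back via `apply_user_feedback`, standing in for the patent's user-input step.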

Phrase recognition model for autonomous vehicles

Aspects of the disclosure relate to training and using a phrase recognition model to identify phrases in images. As an example, a selected phrase list including a plurality of phrases may be received. Each phrase of the plurality of phrases includes text. An initial plurality of images may be received. A training image set may be selected from the initial plurality of images by identifying the phrase-containing images that include one or more phrases from the selected phrase list. Each given phrase-containing image of the training image set may be labeled with information identifying the one or more phrases from the selected phrase list included in the given phrase-containing image. The model may be trained based on the training image set such that the model is configured to, in response to receiving an input image, output data indicating whether a phrase of the plurality of phrases is included in the input image.
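The training-set selection and labeling steps above can be sketched as a simple filter over an image pool. Matching phrases against pre-extracted image text is an assumption here; the patent leaves the matching mechanism to the trained model.

```python
# Illustrative sketch: keep only phrase-containing images and label each
# with every phrase from the selected phrase list that it contains.
def build_training_set(images, phrase_list):
    """images: list of (image_id, extracted_text) pairs."""
    training_set = []
    for image_id, text in images:
        matched = [p for p in phrase_list if p.lower() in text.lower()]
        if matched:
            training_set.append({"image": image_id, "labels": matched})
    return training_set

pool = [("img1", "ROAD CLOSED AHEAD"), ("img2", "open 24 hours"),
        ("img3", "no text of interest")]
phrases = ["road closed", "detour"]
print(build_training_set(pool, phrases))
```

The resulting labeled set would then be used to train the phrase recognition model.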

METHOD AND APPARATUS FOR INTELLIGENT PHARMACOVIGILANCE PLATFORM
20220415467 · 2022-12-29

Disclosed herein are a method and apparatus for providing a pharmacovigilance (PV) platform, wherein a method for operating a server may include: receiving input data from a user device; generating at least one command set from the input data by using a first artificial intelligence model that is selected to process the input data; determining whether or not a user that provides the input data has an authority to execute the at least one command set; generating a result of a task, when the user has the authority, by using a second artificial intelligence model that is selected to perform the task according to the at least one command set; generating output data that displays the result of the task by using a visualization module that is selected to visualize the result of the task; and transmitting the output data to the user device.
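The server flow described above (model selection, command-set generation, authority check, task execution, visualization) can be outlined in a few lines. All names and the permission scheme below are illustrative assumptions.

```python
# Minimal sketch of the PV-platform request flow: a first model turns input
# into a command set, an authority check gates execution, a second model
# performs the task, and a visualization module formats the result.
def handle_request(user, input_data, permissions, first_model, second_model, visualize):
    commands = first_model(input_data)          # generate command set
    if not all(cmd in permissions.get(user, set()) for cmd in commands):
        return {"error": "user lacks authority for command set"}
    result = second_model(commands)             # perform the task
    return {"output": visualize(result)}        # visualized result

perms = {"alice": {"count_reports"}}
out = handle_request(
    "alice", "how many adverse-event reports?",
    perms,
    first_model=lambda text: ["count_reports"],
    second_model=lambda cmds: 42,
    visualize=lambda r: f"Total reports: {r}",
)
```

In the patent, the two models and the visualization module are each selected per request; the lambdas here stand in for that selection.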

Utilizing machine learning models, position based extraction, and automated data labeling to process image-based documents

A device may receive image data that includes an image of a document and lexicon data identifying a lexicon, and may perform an extraction technique on the image data to identify at least one field in the document. The device may utilize form segmentation to automatically generate label data identifying labels for the image data, and may process the image data, the label data, and data identifying the at least one field, with a first model, to identify visual features. The device may process the image data and the visual features, with a second model, to identify sequences of characters, and may process the image data and the sequences of characters, with a third model, to identify strings of characters. The device may compare the lexicon data and the strings of characters to generate verified strings of characters that may be utilized to generate a digitized document.
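The final step above, comparing extracted strings against a lexicon to produce verified strings, can be sketched with fuzzy matching. Using `difflib` edit-distance matching is an assumption; the patent does not specify the comparison method.

```python
# Hedged sketch of lexicon verification: snap each extracted character
# string to its closest lexicon entry, or keep it unchanged if nothing in
# the lexicon is close enough.
import difflib

def verify_strings(strings, lexicon, cutoff=0.6):
    verified = []
    for s in strings:
        match = difflib.get_close_matches(s, lexicon, n=1, cutoff=cutoff)
        verified.append(match[0] if match else s)
    return verified

lexicon = ["invoice", "total", "amount"]
print(verify_strings(["lnvoice", "totai", "xyz"], lexicon))
```

Verified strings like these would then feed the digitized-document generation step.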

Manual curation tool for map data using aggregated overhead views

Examples disclosed herein may involve (i) obtaining a first layer of map data associated with sensor data capturing a geographical area, the first layer of map data comprising an aggregated overhead-view image of the geographical area, where the aggregated overhead-view image is generated from aggregated pixel values from a plurality of images associated with the geographical area, (ii) obtaining a second layer of map data, the second layer of map data comprising label data for the geographical area derived from the aggregated overhead-view image of the geographical area, and (iii) causing the first layer of map data and the second layer of map data to be presented to a user for curation of the label data.
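The first map layer above is built by aggregating pixel values across many images of the same area. A tiny sketch of that aggregation follows; mean aggregation is an assumption, since the example says only "aggregated pixel values".

```python
# Sketch of building the aggregated overhead-view image: average per-pixel
# values across multiple equally-sized overhead captures of one area.
def aggregate_overhead(images):
    """images: list of equally-sized 2-D grids (lists of lists of pixel values)."""
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / len(images)
             for c in range(cols)] for r in range(rows)]

passes = [[[0, 100], [200, 50]],
          [[100, 100], [0, 150]]]
print(aggregate_overhead(passes))
```

The second layer's label data would be derived from this aggregated image and presented alongside it for curation.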

Collaborative text detection and text recognition
11481823 · 2022-10-25

Described are approaches for assigning tasks between machine resources (e.g., AI task performers, AI task validators), human resources (e.g., task performers, task validators), and/or other smart systems to facilitate collaborative text detection, text recognition, and text retrieval in order to optimize system performance along a variety of different selection criteria specifying various performance dimensions, including, but not limited to, improving system efficiency, reducing task performer and/or task validator idle time, improving triage outcomes, reducing data processing loads, maintaining client confidentiality, etc., that may be associated with one or more customers.
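The criterion-driven routing described above can be sketched as scoring candidate resources and picking the best. The scoring weights and resource fields below are assumptions; the patent names the criteria (efficiency, idle time, confidentiality) only at a high level.

```python
# Illustrative sketch of routing a task between machine and human
# resources: hard constraints (confidentiality) eliminate candidates,
# soft criteria (idle time, cost) rank the rest.
def assign_task(task, resources):
    def score(res):
        if task.get("confidential") and not res["confidential_ok"]:
            return float("-inf")          # confidentiality is a hard constraint
        s = res["idle_seconds"] * 0.1     # prefer idle resources
        s -= res["cost_per_task"]         # prefer cheaper resources
        return s
    return max(resources, key=score)["name"]

pool = [
    {"name": "ai_recognizer", "idle_seconds": 5, "cost_per_task": 0.01, "confidential_ok": True},
    {"name": "human_validator", "idle_seconds": 120, "cost_per_task": 2.0, "confidential_ok": True},
]
print(assign_task({"type": "text_recognition", "confidential": True}, pool))
```

In a fuller system, the weights themselves would be chosen per customer to reflect the selection criteria in force.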

COLLABORATIVE TEXT DETECTION AND TEXT RECOGNITION
20230125696 · 2023-04-27

Described are approaches for assigning tasks between machine resources (e.g., AI task performers, AI task validators), human resources (e.g., task performers, task validators), and/or other smart systems to facilitate collaborative text detection, text recognition, and text retrieval in order to optimize system performance along a variety of different selection criteria specifying various performance dimensions, including, but not limited to, improving system efficiency, reducing task performer and/or task validator idle time, improving triage outcomes, reducing data processing loads, maintaining client confidentiality, etc., that may be associated with one or more customers.

Similarity search engine for a digital visual object

The present invention provides a similarity search engine for a digital visual object (i.e., a digital image that represents a design, graphics, logo, symbols, words, or any combination thereof). The similarity search engine is based on a method that conducts several independent search queries, with each query examining a different aspect of similarity.
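Fusing the independent per-aspect queries can be sketched as a weighted combination of their scores. Weighted-sum fusion is an assumption; the abstract says only that the independent queries are combined.

```python
# Sketch of combining several independent similarity queries, each scoring
# a different aspect of a digital visual object (e.g. color vs. shape).
def combined_similarity(candidates, queries, weights):
    """queries: list of functions mapping a candidate to a [0, 1] score."""
    scored = []
    for cand in candidates:
        total = sum(w * q(cand) for q, w in zip(queries, weights))
        scored.append((cand, total))
    return sorted(scored, key=lambda x: x[1], reverse=True)

color_sim = lambda c: {"logo_a": 0.9, "logo_b": 0.2}[c]
shape_sim = lambda c: {"logo_a": 0.4, "logo_b": 0.8}[c]
print(combined_similarity(["logo_a", "logo_b"], [color_sim, shape_sim], [0.7, 0.3]))
```

Each aspect query runs independently, so new aspects (text, layout, symbols) can be added without changing the others.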

Computer system and method for detecting, extracting, weighing, benchmarking, scoring, reporting and capitalizing on complex risks found in buy/sell transactional agreements, financing agreements and research documents
11688017 · 2023-06-27

Computer-implemented systems and methods enhance a user's sophistication as she/he reviews complex information sources using specialized investigative tools provided by a user interface of the computer system. The specialized investigative inquiries are stored in a database and are particularly tailored a priori by a subject-matter content designer for the type of documents being reviewed for risk and opportunity. The investigative scripts are organized into a path of risk-related subjects or topics, and within each path of subjects/topics the investigative scripts are organized into a specialized inquiry or flow chart.

ONLINE, INCREMENTAL REAL-TIME LEARNING FOR TAGGING AND LABELING DATA STREAMS FOR DEEP NEURAL NETWORKS AND NEURAL NETWORK APPLICATIONS

Today, artificial neural networks are trained on large sets of manually tagged images. Generally, for better training, the training data should be as large as possible. Unfortunately, manually tagging images is time consuming and susceptible to error, making it difficult to produce the large sets of tagged data used to train artificial neural networks. To address this problem, the inventors have developed a smart tagging utility that uses a feature extraction unit and a fast-learning classifier to learn tags and tag images automatically, reducing the time to tag large sets of data. The feature extraction unit and fast-learning classifiers can be implemented as artificial neural networks that associate a label with features extracted from an image and tag similar features from the image or other images with the same label. Moreover, the smart tagging system can learn from user adjustment to its proposed tagging. This reduces tagging time and errors.
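The smart-tagging loop above (feature extraction, a fast-learning classifier proposing tags, and learning from user adjustments) can be sketched with a nearest-centroid classifier. The feature vectors and classifier choice are illustrative stand-ins for the artificial neural networks described in the abstract.

```python
# Minimal sketch of the smart tagger: running-mean centroids per label act
# as a fast-learning classifier; proposed tags come from the nearest
# centroid, and user corrections update the centroids incrementally.
class SmartTagger:
    def __init__(self):
        self.centroids = {}   # label -> (running-mean feature vector, count)

    def learn(self, features, label):
        vec, n = self.centroids.get(label, ([0.0] * len(features), 0))
        self.centroids[label] = (
            [(v * n + f) / (n + 1) for v, f in zip(vec, features)], n + 1)

    def propose(self, features):
        if not self.centroids:
            return None
        dist = lambda label: sum(
            (v - f) ** 2 for v, f in zip(self.centroids[label][0], features))
        return min(self.centroids, key=dist)

    def correct(self, features, wrong_label, right_label):
        # Learn from a user adjustment to a proposed tag.
        self.learn(features, right_label)

t = SmartTagger()
t.learn([1.0, 0.0], "cat")
t.learn([0.0, 1.0], "dog")
print(t.propose([0.9, 0.1]))
```

Because learning is a single running-mean update, tags can be incorporated online as data streams in, which is the incremental real-time property the title refers to.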