Patent classifications
G06V30/1478
System and method for masking text within images
This disclosure relates generally to a system and method for masking text within images. Conventional image masking approaches enable masking in general, but masking PII data, which contains sensitive information, remains a challenge. The present disclosure includes a training phase and a masking phase. During the training phase, the PII labels and values of the input image are captured and stored as coordinates in a database. During the masking phase, the test image and the words comprised in the test image are optimized using an OCR technique. The label and value of each pair are compared with the words comprised in the optimized test image; the comparison yields one or more matching labels, and a masking area is calculated for each matching label. A masking string is generated for each matching label based on the calculated masking area, and the original text is masked with the generated string.
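The masking phase described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the function and variable names are hypothetical, OCR is assumed to have already produced words with bounding boxes, and the masking string is simply sized to cover the matched value.

```python
# Hedged sketch of the masking phase: stored label/value pairs are matched
# against OCR'd words, and a masking string of equal length replaces each
# matched value so it covers the same area. All names are illustrative.

def mask_pii(ocr_words, stored_pairs, mask_char="X"):
    """ocr_words: list of (word, (x, y, w, h)) tuples from an OCR pass.
    stored_pairs: dict mapping PII labels (e.g. 'SSN') to known values."""
    masked = []
    for word, box in ocr_words:
        if word in stored_pairs.values():
            # Replace the matched value with a masking string of the
            # same length, keeping the original bounding box (mask area).
            masked.append((mask_char * len(word), box))
        else:
            masked.append((word, box))
    return masked

words = [("Name:", (0, 0, 40, 10)), ("Alice", (45, 0, 38, 10)),
         ("SSN:", (0, 12, 35, 10)), ("123-45-6789", (40, 12, 80, 10))]
pairs = {"Name": "Alice", "SSN": "123-45-6789"}
print(mask_pii(words, pairs))
```

A production system would also render the masking string back into the image at the stored coordinates, which is outside the scope of this sketch.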
Deep-learning-based system and process for image recognition
Disclosed are methods and systems for using artificial intelligence (AI) for image recognition by using predefined coordinates to extract a portion of a received image, the extracted portion comprising a word to be identified having at least a first letter and a second letter; executing an image recognition protocol to identify the first letter; when the server is unable to identify the second letter, the server executes an AI model having a nodal data structure to identify the second letter based upon the identified first letter, the nodal data structure comprising a set of nodes where each node represents a letter, each node connected to at least one other node, wherein connection of a first node to a second node corresponds to a probability that a letter corresponding to the second node is used in a word subsequent to a letter corresponding to the first node.
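The nodal data structure described above, where each node is a letter and each connection carries the probability that one letter follows another, can be sketched as a simple transition table. The probabilities below are illustrative assumptions, not values from the disclosure.

```python
# Hedged sketch of the nodal data structure: nodes are letters, and a
# weighted edge from node A to node B gives the probability that B follows
# A within a word. The transition probabilities here are made up.

TRANSITIONS = {
    "q": {"u": 0.95, "a": 0.03, "i": 0.02},
    "t": {"h": 0.40, "o": 0.25, "e": 0.20, "a": 0.15},
}

def predict_next_letter(first_letter, transitions=TRANSITIONS):
    """Return the most probable letter following first_letter, or None
    when the node has no outgoing connections."""
    edges = transitions.get(first_letter)
    if not edges:
        return None
    return max(edges, key=edges.get)

print(predict_next_letter("q"))  # highest-probability successor of 'q'
```

In the disclosed system, this lookup would only run when the image-recognition protocol fails on the second letter, using the already-identified first letter as the starting node.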
PREPROCESSING IMAGES FOR OCR USING CHARACTER PIXEL HEIGHT ESTIMATION AND CYCLE GENERATIVE ADVERSARIAL NETWORKS FOR BETTER CHARACTER RECOGNITION
A text extraction computing method that comprises calculating an estimated character pixel height of text from a digital image. The method may scale the digital image using the estimated character pixel height and a preferred character pixel height. The method may binarize the digital image. The method may remove distortions using a neural network trained by a cycle GAN on a set of source text images and a set of clean text images. The set of source text images and clean text images are unpaired. The source text images may be distorted images of text. Calculating the estimated character pixel height may include summarizing the rows of pixels into a horizontal projection, determining a line-repetition period from the projection, and quantifying the portion of the line-repetition period that corresponds to the text as the estimated character pixel height. The method may extract characters from the digital image using OCR.
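The height-estimation step in this abstract, summarizing pixel rows into a horizontal projection and measuring the inked portion of each line-repetition period, can be sketched in pure Python. This is a simplified illustration under stated assumptions: the image is already binarized (rows of 0/1 pixels), and the text portion of the period is taken as the average length of the inked row runs.

```python
# Simplified sketch of character pixel height estimation via horizontal
# projection. Assumes a binarized image given as rows of 0/1 pixels;
# the averaging heuristic is an illustrative assumption.

def estimate_char_pixel_height(binary_image):
    # 1. Summarize the rows of pixels into a horizontal projection.
    projection = [sum(row) for row in binary_image]
    # 2. Find runs of "inked" rows; each run is one text line, and the
    #    gap between run starts is the line-repetition period.
    runs, start = [], None
    for i, value in enumerate(projection):
        if value > 0 and start is None:
            start = i
        elif value == 0 and start is not None:
            runs.append(i - start)
            start = None
    if start is not None:
        runs.append(len(projection) - start)
    # 3. Quantify the inked portion of the period as the estimated
    #    character pixel height (mean inked-run length).
    return sum(runs) / len(runs) if runs else 0

# Two text lines, each 3 rows tall, separated by blank rows.
img = [[0]*8, [1]*8, [1]*8, [1]*8, [0]*8,
       [0]*8, [1]*8, [1]*8, [1]*8, [0]*8]
print(estimate_char_pixel_height(img))  # → 3.0
```

The scaling step would then resize the image by `preferred_height / estimated_height` before binarization and OCR.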
Dynamically optimizing photo capture for multiple subjects
A user device detects, in a field of view of the camera, a first side of a document, and determines first information associated with the first side of the document. The user device selects a first image resolution based on the first information and captures, by the camera, a first image of the first side of the document according to the first image resolution. The user device detects, in the field of view of the camera, a second side of the document, and determines second information associated with the second side of the document. The user device selects a second image resolution based on the second information, and captures, by the camera, a second image of the second side of the document according to the second image resolution. The user device performs an action related to the first image and the second image.
Method and System for Securing User Access, Data at Rest and Sensitive Transactions Using Biometrics for Mobile Devices with Protected, Local Templates
Biometric data are obtained from biometric sensors on a stand-alone computing device, which may contain an ASIC, connected to or incorporated within it. The computing device and ASIC, in combination or individually, capture biometric samples, extract biometric features and match them to one or more locally stored, encrypted templates. The biometric matching may be enhanced by the use of an entered PIN. The biometric templates and other sensitive data at rest are encrypted using hardware elements of the computing device and ASIC, and/or a PIN hash. A stored obfuscated Password is de-obfuscated and may be released to the authentication mechanism in response to successfully decrypted templates and matching biometric samples. A different de-obfuscated password may be released to authenticate the user to a remote or local computer and to encrypt data in transit. This eliminates the need for the user to remember and enter complex passwords on the device.
Image processing method, image processing device, electronic device and storage medium
An image processing method, an image processing device, an electronic device, and a storage medium are provided. The image processing method includes: obtaining an input image, wherein the input image includes M character rows; performing global correction processing on the input image to obtain an intermediate corrected image; determining the M character row lower boundaries; determining the relative offset of all pixels in the intermediate corrected image according to the M character row lower boundaries, the first image boundary and the second image boundary of the intermediate corrected image; determining the local adjustment offset of all pixels in the intermediate corrected image according to the relative offsets of all pixels in the intermediate corrected image; and performing local adjustment on the intermediate corrected image according to the local adjustment offsets of all pixels in the intermediate corrected image to obtain the target corrected image.
High-speed OCR decode using depleted centerlines
A method for template matching can include iteratively selecting a template set of points to project over a centerline of a candidate symbol; conducting a template matching analysis; assigning a score to each template set; and selecting a template set with a highest assigned score. For example, the score can depend on proximity of the template points to a center and/or boundaries of a principal tracing path of the symbol. Additionally, one or more template sets having a top rank can be selected for a secondary analysis of proximity of the template points to a boundary of a printing of the symbol. The method can further include using the template with the highest score to interpret the candidate symbol.
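The scoring loop in this abstract, projecting template point sets over a symbol's centerline, scoring each by proximity, and keeping the highest score, can be sketched as follows. The distance metric, score formula, and templates are illustrative assumptions, not the patented decode.

```python
# Hedged sketch of template scoring against a depleted centerline: each
# template is a set of points, scored by average proximity to the traced
# centerline; the highest-scoring template interprets the symbol.

def proximity_score(template_points, centerline_points):
    """Higher when template points lie close to the centerline trace."""
    total = 0.0
    for tx, ty in template_points:
        nearest = min((tx - cx) ** 2 + (ty - cy) ** 2
                      for cx, cy in centerline_points)
        total += 1.0 / (1.0 + nearest)  # illustrative proximity measure
    return total / len(template_points)

def best_template(templates, centerline_points):
    """templates: dict mapping a symbol name to its template point set."""
    return max(templates,
               key=lambda name: proximity_score(templates[name],
                                                centerline_points))

centerline = [(0, 0), (0, 1), (0, 2)]        # a vertical stroke
templates = {"1": [(0, 0), (0, 1), (0, 2)],  # lies on the stroke exactly
             "7": [(0, 0), (1, 0), (0, 2)]}
print(best_template(templates, centerline))  # → 1
```

The secondary analysis the abstract mentions, re-ranking top templates against the boundary of the printed symbol, would reuse the same pattern with a boundary point set in place of the centerline.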
ENHANCED OPTICAL CHARACTER RECOGNITION (OCR) IMAGE SEGMENTATION SYSTEM AND METHOD
Optical character recognition (OCR) based systems and methods for extracting and automatically evaluating contextual and identification information and associated metadata from an image utilizing enhanced image processing techniques and image segmentation. A unique, comprehensive integration with an account provider system and other third party systems may be utilized to automate the execution of an action associated with an online account. The system may evaluate text extracted from a captured image utilizing machine learning processing to classify an image type for the captured image, and select an optical character recognition model based on the classified image type. The system may compare a data value extracted from the recognized text for a particular data type with an associated online account data value for the particular data type to evaluate whether to automatically execute an action associated with the online account linked to the image based on the data value comparison.