G06V30/146

ENHANCED OPTICAL CHARACTER RECOGNITION (OCR) IMAGE SEGMENTATION SYSTEM AND METHOD
20230008869 · 2023-01-12 ·

Optical character recognition (OCR) based systems and methods for extracting and automatically evaluating contextual and identification information and associated metadata from an image utilizing enhanced image processing techniques and image segmentation. A unique, comprehensive integration with an account provider system and other third-party systems may be utilized to automate the execution of an action associated with an online account. The system may evaluate text extracted from a captured image utilizing machine learning processing to classify an image type for the captured image, and select an optical character recognition model based on the classified image type. The system may compare a data value extracted from the recognized text for a particular data type with an associated online account data value for the same data type to evaluate, based on the comparison, whether to automatically execute an action associated with the online account linked to the image.
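The decision flow described above (classify the image type, select an OCR model accordingly, then compare an extracted value against the stored account value) can be sketched as follows. This is a minimal illustration, not the patented implementation: the classifier, the model registry, and all function names are hypothetical placeholders.

```python
def classify_image_type(features):
    # Placeholder for the machine-learning image-type classifier
    # described in the abstract; here a trivial rule stands in.
    return "check" if features.get("has_micr_line") else "statement"

# Hypothetical registry mapping each image type to its OCR model.
OCR_MODELS = {
    "check": lambda img: {"amount": "125.00"},
    "statement": lambda img: {"balance": "980.50"},
}

def should_execute_action(image, features, account_data, data_type, tolerance=0.0):
    """Classify the image, run the matching OCR model, and compare the
    extracted data value with the account's stored value for that type."""
    image_type = classify_image_type(features)
    ocr = OCR_MODELS[image_type]          # model selected by image type
    extracted = ocr(image)
    value = float(extracted[data_type])
    stored = float(account_data[data_type])
    return abs(value - stored) <= tolerance

print(should_execute_action(None, {"has_micr_line": True},
                            {"amount": "125.00"}, "amount"))
```

The action fires only when the extracted value agrees with the account record within the tolerance, mirroring the "evaluate whether to automatically execute" step.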

CHARACTER OFFSET DETECTION METHOD AND SYSTEM
20230360418 · 2023-11-09 ·

The present disclosure discloses a character offset detection method and system. The method includes: acquiring a text image; performing character separation based on the text image to obtain a character text region; calculating a center point of each rectangular box in the character text region to obtain a center point set; determining an optimal fitted curve based on the center point set; and analyzing character offset based on the optimal fitted curve to obtain an offset result. By detecting character offset through curve fitting, the present disclosure improves detection accuracy.
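The core of the method, fitting a curve through the character-box center points and flagging characters that deviate from it, can be sketched with a simple polynomial fit. This is an illustrative approximation: the patent's "optimal fitted curve" selection is not specified here, and the threshold is an assumed parameter.

```python
import numpy as np

def character_offsets(boxes, degree=2, threshold=2.0):
    """boxes: list of (x, y, w, h) rectangles around separated characters.
    Fits a polynomial baseline through the box centers and returns the
    indices (and vertical deviations) of characters whose deviation
    from the fitted curve exceeds `threshold` pixels."""
    centers = np.array([(x + w / 2.0, y + h / 2.0) for x, y, w, h in boxes])
    xs, ys = centers[:, 0], centers[:, 1]
    coeffs = np.polyfit(xs, ys, deg=min(degree, len(boxes) - 1))
    fitted = np.polyval(coeffs, xs)
    deviations = ys - fitted
    return [(i, d) for i, d in enumerate(deviations) if abs(d) > threshold]

# A straight line of characters with one offset glyph:
boxes = [(10 * i, 100, 8, 12) for i in range(8)]
boxes[4] = (40, 110, 8, 12)  # the fifth character sits 10 px low
print(character_offsets(boxes, degree=1))
```

Only the displaced character exceeds the deviation threshold; the residual against the fitted curve is the "offset result".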

DOCUMENT IMAGE CAPTURE

Upon placement of a camera-facing surface of a camera device on a document or upon parallel positioning of the camera-facing surface close to and over the document, images are continually captured by an image capturing sensor of the camera device. While the camera device is being raised above the document, whether the document is fully included within a captured image is detected. In response to detecting that the document is fully included within the captured image, the captured image that fully includes the document is selected as a document image.
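The selection loop above can be sketched as follows: while frames stream in during the raise, keep checking whether the detected document lies entirely inside the frame, and select the first frame for which it does. The corner-detection step itself is assumed to exist upstream; all names and the margin parameter are illustrative.

```python
def document_fully_in_frame(doc_corners, frame_w, frame_h, margin=5):
    """doc_corners: detected corner points (x, y) of the document.
    Returns True when every corner lies inside the frame with a small
    safety margin, i.e. the document is fully included in the image."""
    return all(margin <= x <= frame_w - margin and
               margin <= y <= frame_h - margin
               for x, y in doc_corners)

def select_document_image(frames):
    """frames: iterable of (image, corners_or_None, width, height) captured
    while the camera is raised; returns the first captured image that
    fully includes the document, mirroring the selection step above."""
    for image, corners, w, h in frames:
        if corners and document_fully_in_frame(corners, w, h):
            return image
    return None
```

A frame whose document corners touch the image border is rejected; the first frame whose corners all clear the margin becomes the document image.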

DRUG IDENTIFICATION DEVICE, DRUG IDENTIFICATION METHOD AND PROGRAM, DRUG IDENTIFICATION SYSTEM, DRUG LOADING TABLE, ILLUMINATION DEVICE, IMAGING ASSISTANCE DEVICE, TRAINED MODEL, AND LEARNING DEVICE

A region of a drug to be identified is detected from a captured image generated by imaging the drug to be identified that is imparted with an engraved mark and/or print. The region of the drug to be identified in the captured image is processed to acquire an engraved mark and print extraction image that is an extracted image of the engraved mark and/or print of the drug to be identified. The engraved mark and print extraction image is input, and a drug type of the drug to be identified is inferred to acquire a candidate of the drug type of the drug to be identified.
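The three stages (detect the drug region, extract the engraved mark/print, infer the drug type) can be sketched as a small pipeline. The region detector and mark extractor below are crude stand-ins (a brightness threshold and a high-pass filter), not the patented processing; the classifier is passed in as a placeholder.

```python
import numpy as np

def detect_drug_region(image):
    # Placeholder region detection: threshold the grayscale image and
    # crop to the bounding box of the non-background pixels.
    ys, xs = np.nonzero(image > 32)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def extract_marks(region):
    # Crude engraved-mark/print extraction: subtracting a local mean
    # keeps the fine relief of the marking and suppresses the pill body.
    padded = np.pad(region, 1, mode="edge").astype(float)
    local_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return np.clip(region - local_mean, 0, None)

def identify(image, classifier):
    """Run the pipeline: region -> mark extraction image -> candidate
    drug types from the (externally supplied) inference model."""
    region = detect_drug_region(image)
    marks = extract_marks(region)
    return classifier(marks)
```

The extraction image, rather than the raw photo, is what the inference model consumes, which matches the staged description above.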

Method for detecting image of esophageal cancer using hyperspectral imaging

This application provides a method for detecting images of a testing object using hyperspectral imaging. First, hyperspectral imaging information is obtained from a reference image, whereby a corresponding hyperspectral image is derived from an input image and corresponding feature values are obtained; principal component analysis is then applied to simplify these feature values. Next, feature images are obtained by convolution kernels, and an image of the object under detection is located within the feature image using a default box and a boundary box. By comparison with esophageal cancer sample images, the image of the object under detection is classified as an esophageal cancer image or a non-esophageal cancer image. In this way, an input image from the image capturing device is evaluated by the convolutional neural network to judge whether it is an esophageal cancer image, helping the doctor interpret the image of the object under detection.
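The feature-simplification step, projecting each pixel's spectrum onto its principal components before the convolutional stages, can be sketched with a plain eigendecomposition. This is only the PCA step under assumed conventions (a band-last hypercube); the convolution, box regression, and classification stages are omitted.

```python
import numpy as np

def pca_reduce(hypercube, n_components=3):
    """hypercube: (H, W, B) hyperspectral image with B spectral bands.
    Projects each pixel's spectrum onto the top principal components,
    reducing the per-pixel feature dimension from B to n_components."""
    h, w, b = hypercube.shape
    pixels = hypercube.reshape(-1, b).astype(float)
    pixels -= pixels.mean(axis=0)               # center each band
    cov = np.cov(pixels, rowvar=False)          # (B, B) band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return (pixels @ top).reshape(h, w, n_components)
```

The reduced cube keeps the spatial layout but compresses the spectral axis, so later convolution kernels operate on far fewer channels.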

DEEP-LEARNING-BASED SYSTEM AND PROCESS FOR IMAGE RECOGNITION

Disclosed are methods and systems for using artificial intelligence (AI) for image recognition by using predefined coordinates to extract a portion of a received image, the extracted portion comprising a word to be identified having at least a first letter and a second letter; executing an image recognition protocol to identify the first letter; when the server is unable to identify the second letter, the server executes an AI model having a nodal data structure to identify the second letter based upon the identified first letter, the nodal data structure comprising a set of nodes where each node represents a letter, each node connected to at least one other node, wherein connection of a first node to a second node corresponds to a probability that a letter corresponding to the second node is used in a word subsequent to a letter corresponding to the first node.
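The nodal data structure described above (letters as nodes, edges weighted by the probability that one letter follows another in a word) can be sketched as a small bigram graph. The probabilities and the fallback logic here are illustrative, not the patent's trained model.

```python
# Each node is a letter; edge weights are the probability that the
# target letter follows the source letter within a word.
BIGRAM = {
    "t": {"h": 0.35, "o": 0.12, "e": 0.10},
    "q": {"u": 0.98},
}

def infer_next_letter(prev_letter, candidates=None):
    """When image recognition fails on the second letter, fall back to
    the graph: choose the most probable successor of the letter that
    was successfully recognised, optionally restricted to candidates."""
    edges = BIGRAM.get(prev_letter, {})
    if candidates:
        edges = {c: p for c, p in edges.items() if c in candidates}
    return max(edges, key=edges.get) if edges else None

print(infer_next_letter("q"))   # highest-probability successor of "q"
print(infer_next_letter("t"))
```

The graph only resolves the ambiguous letter; the unambiguous first letter still comes from the ordinary image-recognition protocol.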

Method and system for securing user access, data at rest and sensitive transactions using biometrics for mobile devices with protected, local templates

Biometric data are obtained from biometric sensors on a stand-alone computing device, which may contain an ASIC, connected to or incorporated within it. The computing device and ASIC, in combination or individually, capture biometric samples, extract biometric features and match them to one or more locally stored, encrypted templates. The biometric matching may be enhanced by the use of an entered PIN. The biometric templates and other sensitive data at rest are encrypted using hardware elements of the computing device and ASIC, and/or a PIN hash. A stored obfuscated Password is de-obfuscated and may be released to the authentication mechanism in response to successfully decrypted templates and matching biometric samples. A different de-obfuscated password may be released to authenticate the user to a remote or local computer and to encrypt data in transit. This eliminates the need for the user to remember and enter complex passwords on the device.
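The obfuscated-password release described above can be sketched with a simple XOR scheme keyed from device hardware elements and a PIN hash: the stored blob is de-obfuscated only after a successful biometric match. This is an illustrative construction, not the patented mechanism; the key derivation and all names are assumptions.

```python
import hashlib

def _keystream(secret, n):
    # Derive n pseudo-random bytes from the device secret + PIN hash.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def obfuscate(password, secret):
    data = password.encode()
    return bytes(a ^ b for a, b in zip(data, _keystream(secret, len(data))))

def release_password(blob, secret, biometric_match):
    """De-obfuscate and release the stored password only in response to
    a successful biometric match, as in the flow described above."""
    if not biometric_match:
        return None
    plain = bytes(a ^ b for a, b in zip(blob, _keystream(secret, len(blob))))
    return plain.decode()

secret = hashlib.sha256(b"device-key" + b"pin-hash").digest()
blob = obfuscate("S3cure!Pass", secret)
print(release_password(blob, secret, biometric_match=True))
```

Because the blob is useless without both the hardware-derived secret and a live match, the user never types the underlying complex password.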

FRAMEWORK FOR DOCUMENT LAYOUT AND INFORMATION EXTRACTION

Provided herein are system, apparatus, device, method, and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for extracting data from a file. Embodiments described herein provide a framework to merge outputs of various models comprising extracted information from a file with its location information and annotated regions of interest into an output file ingestible by a database or knowledge base.
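The merge step, combining each model's extracted information with its location and any annotated region of interest into one database-ingestible record, can be sketched as a join keyed on region identifiers. The schema below (region/text/bbox/label fields, JSON output) is an assumed illustration, not the framework's actual format.

```python
import json

def merge_outputs(text_model_out, layout_model_out, annotations):
    """Each input maps a region id to one model's result; the merged
    record pairs extracted text with its location and any annotated
    label, producing one row per region for ingestion downstream."""
    merged = []
    for region_id in sorted(set(text_model_out) | set(layout_model_out)):
        merged.append({
            "region": region_id,
            "text": text_model_out.get(region_id),
            "bbox": layout_model_out.get(region_id),
            "label": annotations.get(region_id),
        })
    return json.dumps(merged, indent=2)
```

Regions seen by only one model still produce a row (with the missing fields null), so no extracted information is dropped in the merge.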

Image processing system and an image processing method
11823497 · 2023-11-21 ·

An image processing system and an image processing method for localising recognised characters in an image. An estimation unit is configured to estimate a first location of a recognised character that has been obtained by performing character recognition of the image. A determination unit is configured to determine second locations of a plurality of connected components in the image. A comparison unit is configured to compare the first location and the second locations, to identify a connected component associated with the recognised character. An association unit is configured to associate the recognised character, the identified connected component, and the second location of the identified connected component.
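The estimate-then-match flow above can be sketched in pure Python: label the connected components of the binarised image, then associate each recognised character with the component whose centre lies nearest the character's estimated location. The nearest-centre comparison is an assumed matching rule; the patent only requires that first and second locations be compared.

```python
import numpy as np

def connected_components(binary):
    """Label 4-connected foreground components of a boolean image and
    return a list of (x_min, y_min, x_max, y_max) bounding boxes."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    boxes, current = [], 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not labels[sy, sx]:
                current += 1
                stack = [(sy, sx)]
                labels[sy, sx] = current
                x0 = x1 = sx
                y0 = y1 = sy
                while stack:  # flood fill one component
                    y, x = stack.pop()
                    x0, x1 = min(x0, x), max(x1, x)
                    y0, y1 = min(y0, y), max(y1, y)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w and
                                binary[ny, nx] and not labels[ny, nx]):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes

def associate(char, estimated_xy, boxes):
    """Compare the character's estimated (first) location with the
    component (second) locations and bind the character to the
    nearest component and that component's bounding box."""
    def centre(b):
        return ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)
    best = min(range(len(boxes)),
               key=lambda i: (centre(boxes[i])[0] - estimated_xy[0]) ** 2 +
                             (centre(boxes[i])[1] - estimated_xy[1]) ** 2)
    return {"char": char, "component": best, "location": boxes[best]}
```

The returned record is exactly the triple the association unit produces: recognised character, identified connected component, and the component's location.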