Patent classifications
G06V30/1478
SYSTEM AND METHOD OF CHARACTER RECOGNITION USING FULLY CONVOLUTIONAL NEURAL NETWORKS WITH ATTENTION
Embodiments of the present disclosure include a method that obtains a digital image. The method includes extracting a word block from the digital image. The method includes processing the word block by evaluating a value of the word block against a dictionary. The method includes outputting a prediction equal to a common word in the dictionary when a confidence factor is greater than a predetermined threshold. The method includes processing the word block and assigning a descriptor to the word block corresponding to a property of the word block. The method includes processing the word block using the descriptor to prioritize evaluation of the word block. The method includes concatenating a first output and a second output. The method includes predicting a value of the word block.
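The dictionary-gating step described above can be sketched as follows. The abstract does not specify the confidence function, so a simple character-overlap ratio is used as a placeholder; `predict_word` and `threshold` are illustrative names, not terms from the patent.

```python
# Sketch of the dictionary-gated prediction step: output a dictionary word
# when the confidence factor clears the threshold, otherwise fall back to
# the recognizer's raw value.

def predict_word(raw_value, dictionary, threshold=0.7):
    def confidence(a, b):
        # Placeholder confidence: positional character-overlap ratio.
        if not a or not b:
            return 0.0
        matches = sum(1 for x, y in zip(a, b) if x == y)
        return matches / max(len(a), len(b))

    best_word, best_conf = raw_value, 0.0
    for word in dictionary:
        c = confidence(raw_value, word)
        if c > best_conf:
            best_word, best_conf = word, c
    return best_word if best_conf > threshold else raw_value

print(predict_word("houze", ["house", "mouse"]))  # confident match -> "house"
print(predict_word("xyz", ["house", "mouse"]))    # low confidence -> "xyz"
```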
EXTRACTING CARD DATA FROM MULTIPLE CARDS
Extracting financial card information with relaxed alignment comprises a method to receive an image of a card, determine one or more edge finder zones in locations of the image, and identify lines in the one or more edge finder zones. The method further identifies one or more quadrilaterals formed by intersections of extrapolations of the identified lines, determines an aspect ratio of each of the one or more quadrilaterals, and compares the determined aspect ratios of the quadrilaterals to an expected aspect ratio. The method then identifies a quadrilateral that matches the expected aspect ratio and performs an optical character recognition algorithm on the rectified model. A similar method is performed on multiple cards in an image. The results of the analysis of each of the cards are compared to improve the accuracy of the data.
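The aspect-ratio matching step could look roughly like the sketch below. The expected ratio here is the ISO/IEC 7810 ID-1 card ratio (85.60 mm × 53.98 mm ≈ 1.586); the tolerance and the helper names are assumptions, not values from the patent.

```python
import math

# Hypothetical helper for the aspect-ratio comparison step: estimate a
# quadrilateral's aspect ratio from its corners and check it against the
# expected card ratio.

CARD_ASPECT = 85.60 / 53.98  # ISO/IEC 7810 ID-1 card dimensions

def quad_aspect_ratio(corners):
    """Aspect ratio as the mean of opposite side lengths, long over short."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    a, b, c, d = corners  # corners ordered around the quadrilateral
    width = (dist(a, b) + dist(c, d)) / 2
    height = (dist(b, c) + dist(d, a)) / 2
    long_side, short_side = max(width, height), min(width, height)
    return long_side / short_side

def matches_expected(corners, expected=CARD_ASPECT, tol=0.1):
    return abs(quad_aspect_ratio(corners) - expected) <= tol

# A roughly card-shaped quadrilateral in pixel coordinates.
print(matches_expected([(0, 0), (160, 0), (160, 101), (0, 101)]))  # True
```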
Method and system for securing user access, data at rest and sensitive transactions using biometrics for mobile devices with protected, local templates
Biometric data are obtained from biometric sensors on a stand-alone computing device, which may contain an ASIC, connected to or incorporated within it. The computing device and ASIC, in combination or individually, capture biometric samples, extract biometric features and match them to one or more locally stored, encrypted templates. The biometric matching may be enhanced by the use of an entered PIN. The biometric templates and other sensitive data at rest are encrypted using hardware elements of the computing device and ASIC, and/or a PIN hash. A stored obfuscated PassWord is de-obfuscated and may be released to the authentication mechanism in response to successfully decrypted templates and matching biometric samples. A different de-obfuscated password may be released to authenticate the user to a remote or local computer and to encrypt data in transit. This eliminates the need for the user to remember and enter complex passwords on the device.
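The release flow above can be sketched as follows. The patent does not disclose its obfuscation scheme or key material, so PBKDF2 key derivation over a PIN plus a hardware identifier, combined with XOR, is used as a stand-in; all names and parameters are illustrative.

```python
import hashlib

# Illustrative sketch only: derive a key from the PIN and a device hardware
# element, and release the de-obfuscated password only after a successful
# biometric match. XOR stands in for the unspecified obfuscation scheme.

def derive_key(pin, hardware_id, length=32):
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), hardware_id,
                               100_000, dklen=length)

def xor_bytes(data, key):
    # Symmetric obfuscate/de-obfuscate placeholder.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def release_password(stored, pin, hardware_id, biometric_match):
    """Release the de-obfuscated password only on a biometric match."""
    if not biometric_match:
        return None
    return xor_bytes(stored, derive_key(pin, hardware_id)).decode()

hw_id = b"device-asic-serial"  # stands in for a hardware element
stored = xor_bytes(b"S3cretPassw0rd", derive_key("1234", hw_id))
print(release_password(stored, "1234", hw_id, True))   # S3cretPassw0rd
print(release_password(stored, "1234", hw_id, False))  # None
```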
Information processing apparatus, information processing system, information processing method, and computer program product
An information processing apparatus includes a reading unit, an edge extraction unit, a feature point search unit, a document area size calculation unit, a user operation unit, and a multi-feed determination unit. The reading unit optically reads a document to generate image data. The edge extraction unit extracts an edge of the document from the image data. The feature point search unit searches edge data of the edge extracted by the edge extraction unit for a feature point. The document area size calculation unit calculates a document area size from the feature point found by the feature point search unit. The user operation unit is used by a user to specify a document size. The multi-feed determination unit compares the document area size calculated by the document area size calculation unit with the user-specified document size so as to determine whether multi-feed of the document has occurred.
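The multi-feed comparison reduces to a simple area check, sketched below: if the calculated document area noticeably exceeds the area implied by the user-specified size, overlapping sheets were likely fed together. The 5% tolerance is an assumption, not a figure from the patent.

```python
# Minimal sketch of the multi-feed determination unit's comparison.

def is_multi_feed(calculated_area, user_width, user_height, tol=0.05):
    """True when the scanned document area exceeds the user-specified
    document size by more than the tolerance."""
    expected_area = user_width * user_height
    return calculated_area > expected_area * (1 + tol)

# A4 specified (210 x 297 mm) but ~1.5 pages of area detected.
print(is_multi_feed(93_000, 210, 297))  # True
print(is_multi_feed(62_000, 210, 297))  # False
```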
IMAGING TERMINAL, IMAGING SENSOR TO DETERMINE DOCUMENT ORIENTATION BASED ON BAR CODE ORIENTATION AND METHODS FOR OPERATING THE SAME
Embodiments of an image reader and/or methods of operating an image reader can capture an image, identify a bar code or IBI form within the captured image, and store or display the captured image responsive to an orientation of the bar code.
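Deriving the document orientation from the bar code orientation might be normalized as in this sketch. Snapping the detected angle to 90-degree steps is an assumption about how such a reader would handle skew, not a claim from the abstract.

```python
# Hedged sketch: compute the clockwise rotation that brings an image
# upright, given the angle at which the bar code was detected.

def normalize_rotation(barcode_angle_deg):
    """Return the clockwise rotation (0/90/180/270) that undoes the
    bar code's detected orientation, snapped to the nearest 90 degrees."""
    snapped = round(barcode_angle_deg / 90) % 4 * 90
    return (360 - snapped) % 360

print(normalize_rotation(85))   # bar code near 90 deg -> rotate 270
print(normalize_rotation(0))    # already upright -> 0
```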
Image reader and image forming apparatus determining direction of document to be read
An image forming apparatus includes: a document reading section; a character detection section detecting, based on image data of a document obtained through reading by the document reading section, characters included in an image formed on the document; a character concentration detection section detecting concentration of the characters detected by the character detection section; a character direction detection section detecting a direction of the characters whose concentration detected by the character concentration detection section is in a preset specified concentration range; and a document direction determination section determining, based on the direction of the characters detected by the character direction detection section, a direction of the image formed on the document as a document direction, wherein the character direction detection section, upon determination that the image on the document is a monochromatic image, defines, as the specified concentration range, a concentration range higher than a predefined first concentration.
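The determination above amounts to a concentration-filtered vote over character directions, sketched below. The tuple layout and range values are illustrative; the patent does not give concrete numbers.

```python
from collections import Counter

# Sketch: only characters whose concentration (density) falls inside the
# specified range cast a vote for the document direction.

def document_direction(characters, low, high):
    """characters: iterable of (direction_degrees, concentration) pairs.
    Returns the majority direction among in-range characters, or None."""
    votes = Counter(d for d, c in characters if low <= c <= high)
    if not votes:
        return None
    return votes.most_common(1)[0][0]

# Faint characters (0.2) fall below the specified range and are ignored.
chars = [(0, 0.9), (0, 0.85), (180, 0.2), (0, 0.8)]
print(document_direction(chars, 0.5, 1.0))  # 0
```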
Adaptive guidelines for handwriting
One embodiment provides a method, involving: receiving, at a device, handwriting input from a user; detecting, using a processor, a location of at least a part of the handwriting input; and providing, on a display device, at least one adaptive line to guide the handwriting input; wherein the at least one adaptive line is positioned based on the location of at least a part of the handwriting input. Other aspects are described and claimed.
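Positioning a guideline based on the detected handwriting location could be as simple as the sketch below: place an adaptive baseline just under the bounding box of the strokes received so far. The pixel offset is an assumed rendering choice, not part of the claim.

```python
# Minimal sketch of adaptive guideline placement from handwriting input.

def guideline_y(stroke_points, offset=4):
    """Return the y position for an adaptive baseline, given the (x, y)
    points of the handwriting input detected so far."""
    if not stroke_points:
        return None
    return max(y for _, y in stroke_points) + offset

print(guideline_y([(10, 40), (18, 52), (25, 47)]))  # 56
```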
SYSTEMS AND METHODS FOR PROCESSING ITEMS
Embodiments of a system and method for sorting and delivering articles in a processing facility based on image data are described. Image processing results such as rotation notation information may be included in or with an image to facilitate downstream processing such as when the routing information cannot be extracted from the image using an unattended system and the image is passed to an attended image processing system. The rotation notation information may be used to dynamically adjust the image before presenting the image via the attended image processing system.
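The rotation-notation hand-off between the unattended and attended systems might look like this sketch. The record layout and field name are hypothetical; the abstract only says the notation travels in or with the image.

```python
# Sketch: the unattended system attaches rotation notation to an image
# record; the attended system uses it to adjust the image before display.

def annotate(image_record, rotation_deg):
    """Attach rotation notation produced during unattended processing."""
    image_record["rotation_notation"] = rotation_deg % 360
    return image_record

def prepare_for_review(image_record):
    """Number of 90-degree clockwise turns the attended UI should apply
    before presenting the image to an operator."""
    return image_record.get("rotation_notation", 0) // 90

record = annotate({"id": "mailpiece-42"}, 270)
print(prepare_for_review(record))  # 3
```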
SYSTEMS AND METHODS FOR AUTOMATED DOCUMENT INGESTION
Automated document ingestion (ADI) provides a comprehensive system and method to streamline document ingestion automation through developing, deploying, and monitoring machine learning models and tools. The system is designed to integrate alongside existing manual entry pipelines within a company. ADI has multiple components to accomplish each step of this task, namely document enhancements, an augmented data entry user interface, and a machine learning operations (ML Ops) pipeline.