Patent classifications
G06V30/146
SEMANTIC TEMPLATE MATCHING
A system and method for field extraction including determining a key position of a key in an electronic file, isolating candidate key values based on a distance from the key position, selecting a key value from the candidate key values based on an output of a trained neural network, and extracting the key and the key value from the electronic file, regardless of a key-value structure.
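The candidate-isolation step above can be illustrated with a minimal sketch: given the detected key position, keep only value candidates within a distance threshold and pick the nearest. The function name, the tuple interface, and the distance heuristic are illustrative assumptions; the abstract's trained neural network is replaced here by a plain nearest-neighbour rule.

```python
import math

def select_key_value(key_pos, candidates, max_dist=100.0):
    """Pick the candidate value nearest to the key position.

    key_pos: (x, y) centre of the detected key text.
    candidates: list of (value_text, (x, y)) tuples.
    max_dist: candidates farther than this are discarded.
    NOTE: a simple distance heuristic stands in for the
    trained-network scoring described in the abstract.
    """
    in_range = [(math.dist(key_pos, pos), text)
                for text, pos in candidates
                if math.dist(key_pos, pos) <= max_dist]
    if not in_range:
        return None
    return min(in_range)[1]
```

In a real pipeline the distance filter would produce the candidate set, and a learned scorer would rank it; the threshold here simply bounds the search region around the key.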
Tyre sidewall imaging method
A computer-implemented method is proposed for classifying one or more embossed and/or engraved markings on a sidewall of a tyre into one or more classes, based on digital image data of the sidewall of the tyre. The method comprises generating a first image channel from a first portion of the digital image data relating to a corresponding first portion of the sidewall of the tyre. Generating the first image channel comprises performing histogram equalisation on the first portion of the digital image data to generate the first image channel. The method further comprises generating a first feature map using the first image channel and applying a first classifier to the first feature map to classify said embossed and/or engraved markings into one or more first classes.
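The histogram-equalisation step named in this abstract is a standard contrast enhancement; a minimal pure-Python version over a flat list of grey values (the function name and list-based interface are assumptions, not the patented implementation) looks like this:

```python
def equalize_histogram(pixels, levels=256):
    """Classic histogram equalisation for one image channel.

    pixels: flat list of integer grey values in [0, levels).
    Returns a list of the same length with the cumulative
    distribution mapped onto the full intensity range, which
    spreads out low-contrast embossed markings.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    # Standard equalisation mapping of each grey level.
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [lut[p] for p in pixels]
```

Equalisation helps here because embossed tyre markings differ from their background mainly by subtle shading; stretching the intensity distribution makes those differences usable by the downstream classifier.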
Image processing apparatus, image processing method, and storage medium
An image processing apparatus executes a first morphology for a first binary image, to generate a second binary image, specifies a vertical line missing region based on the second binary image, executes a second morphology under a condition different from a condition in the first morphology for the second binary image, to generate a third binary image, acquires pixel information about a region corresponding to the vertical line missing region in the third binary image, and corrects a region corresponding to the vertical line missing region in the first binary image using the acquired pixel information, to generate a fourth binary image.
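The two morphology passes under different conditions can be sketched with elementary binary dilation and erosion; closing with a tall, thin structuring element bridges short breaks in vertical lines, a greatly simplified stand-in for the vertical-line-missing-region repair described above (all function names and the border convention are assumptions):

```python
def dilate(img, se_h, se_w):
    """Binary dilation with an se_h x se_w rectangular element."""
    h, w = len(img), len(img[0])
    rh, rw = se_h // 2, se_w // 2
    return [[int(any(img[yy][xx]
                     for yy in range(max(0, y - rh), min(h, y + rh + 1))
                     for xx in range(max(0, x - rw), min(w, x + rw + 1))))
             for x in range(w)] for y in range(h)]

def erode(img, se_h, se_w):
    """Binary erosion; out-of-bounds neighbours are treated as
    foreground so closing does not eat the image border."""
    h, w = len(img), len(img[0])
    rh, rw = se_h // 2, se_w // 2
    return [[int(all(img[yy][xx]
                     for yy in range(y - rh, y + rh + 1)
                     for xx in range(x - rw, x + rw + 1)
                     if 0 <= yy < h and 0 <= xx < w))
             for x in range(w)] for y in range(h)]

def close_vertical_gaps(img, gap=3):
    """Morphological closing with a tall, thin element to bridge
    short breaks in vertical lines."""
    return erode(dilate(img, gap, 1), gap, 1)
```

Running a second closing with a different element size or orientation, then comparing the two results, is one way to localise regions where a line is missing, in the spirit of the apparatus described.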
Enhanced optical character recognition (OCR) image segmentation system and method
Optical character recognition (OCR) based systems and methods for extracting and automatically evaluating contextual and identification information and associated metadata from an image utilizing enhanced image processing techniques and image segmentation. A unique, comprehensive integration with an account provider system and other third party systems may be utilized to automate the execution of an action associated with an online account. The system may evaluate text extracted from a captured image utilizing machine learning processing to classify an image type for the captured image, and select an optical character recognition model based on the classified image type. The system may compare a data value extracted from the recognized text for a particular data type with an associated online account data value for the particular data type to evaluate whether to automatically execute an action associated with the online account linked to the image based on the data value comparison.
Computer-implemented segmented numeral character recognition and reader
Computer-implemented methods, systems and devices having segmented numeral character recognition. In an embodiment, users may take digital pictures of a seven-segment display on a sensor device. For example, a user at a remote location may use a digital camera to capture a digital image of a seven-segment display on a sensor device. Captured images of a seven-segment display may then be sent or uploaded over a network to a remote health management system. The health management system includes a reader that processes the received images to determine sensor readings representative of the values output on the seven-segment displays of the remote sensor devices. Machine learning and OCR are used to identify numeric characters in images associated with seven-segment displays. In this way, a remote health management system can obtain sensor readings from remote locations when users only have access to sensor devices with seven-segment displays.
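Once the on/off state of each of the seven segments has been determined from the image, decoding reduces to a lookup table; this standard mapping (segment ordering is an assumption of this sketch, not taken from the patent) is the core of any seven-segment reader:

```python
# Segments ordered (top, top-left, top-right, middle,
# bottom-left, bottom-right, bottom); 1 = segment lit.
SEGMENT_DIGITS = {
    (1, 1, 1, 0, 1, 1, 1): 0,
    (0, 0, 1, 0, 0, 1, 0): 1,
    (1, 0, 1, 1, 1, 0, 1): 2,
    (1, 0, 1, 1, 0, 1, 1): 3,
    (0, 1, 1, 1, 0, 1, 0): 4,
    (1, 1, 0, 1, 0, 1, 1): 5,
    (1, 1, 0, 1, 1, 1, 1): 6,
    (1, 0, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}

def read_digit(segment_states):
    """Map a tuple of seven on/off segment states to a digit,
    or None when the pattern matches no valid digit."""
    return SEGMENT_DIGITS.get(tuple(segment_states))
```

The hard part in practice is the upstream image processing (locating the display, thresholding each segment region); once segment states are binary, decoding is exact, which is why this display type suits remote OCR well.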
SYSTEMS AND METHODS FOR PRINTED CODE INSPECTION
This specification describes methods and systems for printed code inspection. For instance, the specification describes a computer-implemented method for printed code inspection by a printed code inspection system operating in conjunction with a production line apparatus configured to move objects along a production line comprising: receiving an image of an object to which a printed code comprising one or more printed characters should have been applied, the image having been captured when the object was located at a particular position on the production line; analysing the image to detect, based on a set of one or more character identification parameters, at least one candidate character within the image; determining, for each of the at least one candidate characters and based on a set of one or more candidate character properties, a likelihood that the candidate character is one of the printed characters of the printed code that should have been applied to the object; determining, based on the candidate characters determined as being likely to be one of the printed characters of the printed code that should have been applied to the object, whether the printed code is present and legible on the object; and outputting an indication as to whether the printed code that should have been applied to the object is present and legible on the object.
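The present-and-legible decision described above can be reduced to a sketch: threshold each candidate character's likelihood and check that the confident candidates reconstruct the expected code. The function name, the `(character, likelihood)` interface, and the exact-match legibility rule are assumptions made for illustration, not the claimed method:

```python
def inspect_printed_code(candidates, expected_code, likelihood_threshold=0.8):
    """Decide whether an expected printed code is present and legible.

    candidates: list of (character, likelihood) pairs in reading
    order, as produced by an upstream detector (hypothetical
    interface). The code counts as present and legible when the
    confident candidates spell out the expected code exactly.
    """
    confident = [ch for ch, p in candidates if p >= likelihood_threshold]
    return "".join(confident) == expected_code
```

A production inspector would also tolerate per-character ambiguity and report *which* characters failed, so the line operator can distinguish a missing print from an illegible one.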
Persistent feature based image rotation and candidate region of interest
Embodiments of a system and method for sorting and delivering articles in a processing facility based on image data are described. Image processing results such as rotation notation information may be included in or with an image to facilitate downstream processing such as when the routing information cannot be extracted from the image using an unattended system and the image is passed to an attended image processing system. The rotation notation information may be used to dynamically adjust the image before presenting the image via the attended image processing system.
IMAGE DEWARPING WITH CURVED DOCUMENT BOUNDARIES
An example non-transitory computer-readable medium includes instructions executable by a processor to detect boundaries of a representation of a document page in a captured image, model the boundaries of the representation of the document page as nonlinear curves, use the nonlinear curves to transform pixels of the representation of the document page into pixels of a dewarped representation of the document page, and output a dewarped image based on the dewarped representation of the document page.
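The curve-based transform can be sketched by sampling the fitted top and bottom boundary curves at each output column and linearly blending between them, so each dewarped pixel maps back to a source coordinate; this interpolation scheme is a simplified illustration of the nonlinear-curve dewarping described, with names and interface invented here:

```python
def dewarp_coords(top, bottom, out_h):
    """Map each (row, col) of the dewarped image to source coords.

    top, bottom: lists of y-values sampling the fitted top and
    bottom boundary curves at each output column. Each output
    column is filled by linearly interpolating between the two
    curves, which straightens pages whose boundaries bow.
    Returns a grid of (x, y) source coordinates.
    """
    out_w = len(top)
    coords = []
    for row in range(out_h):
        t = row / (out_h - 1) if out_h > 1 else 0.0
        coords.append([(col, top[col] + t * (bottom[col] - top[col]))
                       for col in range(out_w)])
    return coords
```

Resampling the captured image at these coordinates (e.g. with bilinear interpolation) yields the dewarped output; fitting the boundaries as nonlinear curves rather than straight lines is what lets this handle pages that curl near the spine.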
Data normalization and extraction system
A data ingestion system normalizes ingested documents and extracts data based on a template that is applied to the documents. In an aspect, the system accesses a document of a document type and determines a template to apply to the document. The system normalizes the document, extracts data values from the document based at least in part on the template, and generates structured data based at least partly on the extracted data.
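A minimal sketch of template-driven extraction: normalize the raw text, then apply per-field patterns from a template keyed by document type. The `INVOICE_TEMPLATE` fields, patterns, and function names are hypothetical examples, not the system's actual templates:

```python
import re

# Hypothetical template for one document type: field name -> pattern.
INVOICE_TEMPLATE = {
    "invoice_number": r"Invoice\s*#?\s*:?\s*(\w+)",
    "total": r"Total\s*:?\s*\$?([\d.]+)",
}

def normalize(text):
    """Minimal normalisation: collapse whitespace runs."""
    return re.sub(r"\s+", " ", text).strip()

def extract(text, template):
    """Apply a per-document-type template to pull structured data."""
    doc = normalize(text)
    out = {}
    for field, pattern in template.items():
        m = re.search(pattern, doc, re.IGNORECASE)
        if m:
            out[field] = m.group(1)
    return out
```

Selecting which template to apply (the "determines a template" step) would typically key off a document-type classifier; once chosen, the template turns free text into the structured record the abstract describes.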