Patent classifications
G06K9/18
Facility walkthrough and maintenance guided by scannable tags or data
A mobile device configured to enable a user to maintain a facility includes: a display device; a network interface configured to communicate across a computer network with an external computer server to retrieve facility data; an antenna for interrogating an RFID tag; a reader configured to read a response signal generated by the RFID tag in response to the interrogating and to process the response signal to extract tag information; and a processor configured to determine, based on the facility data, whether the tag information includes a room identifier identifying a room within the facility or an equipment identifier identifying equipment within the facility, retrieve display data from the stored facility data based on the identified information, and present the display data on the display device.
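The dispatch step described above can be sketched as follows. This is an illustrative assumption of how the tag lookup might work: the `ROOM:`/`EQUIP:` prefixes and the `facility_data` layout are invented for this sketch and are not from the abstract.

```python
# Hypothetical tag-dispatch step: the extracted tag information is matched
# against retrieved facility data to decide whether it names a room or a
# piece of equipment, and the corresponding display record is fetched.
# Prefixes and data layout are illustrative, not from the patent.

def resolve_tag(tag_info: str, facility_data: dict):
    """Return the display record for a scanned tag, or None if unknown."""
    if tag_info.startswith("ROOM:"):
        return facility_data["rooms"].get(tag_info[5:])
    if tag_info.startswith("EQUIP:"):
        return facility_data["equipment"].get(tag_info[6:])
    return None  # tag identifies neither a room nor equipment

facility_data = {
    "rooms": {"101": {"name": "Boiler Room", "tasks": ["check gauges"]}},
    "equipment": {"P-7": {"name": "Pump 7", "manual": "pump7.pdf"}},
}

print(resolve_tag("ROOM:101", facility_data)["name"])   # Boiler Room
print(resolve_tag("EQUIP:P-7", facility_data)["name"])  # Pump 7
```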
Automated assessment
A system for automated assessments is provided. The system includes a processing device and a memory device. The memory device stores instructions that when executed by the processing device may result in receiving a video stream including image frames from an imaging source. Image data from one or more of the image frames can be compared with an image profile database to determine whether at least one item in one or more of the image frames is identifiable. A lookup operation can be initiated in a secondary data source to determine one or more item attributes of the one or more identified items. The video stream can be modified to add an item identifier for each of the identified items to the video stream as an annotated video. A record can be stored of the one or more item attributes of the identified items in the annotated video.
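The frame-by-frame pipeline described above can be sketched in a few lines. The exact-match "profile database" and dictionary layouts below are stand-ins invented for illustration; a real system would use image matching rather than signature lookup.

```python
# Illustrative sketch of the assessment pipeline: each frame is matched
# against an image-profile database; identified items trigger a lookup in a
# secondary source for item attributes, the frame is annotated with an item
# identifier, and a record of the attributes is stored.

def assess_stream(frames, profile_db, secondary_source):
    annotated, records = [], []
    for frame in frames:
        item = profile_db.get(frame["signature"])     # identification step
        if item is None:
            annotated.append(frame)                   # nothing identified
            continue
        attrs = secondary_source.get(item, {})        # attribute lookup
        annotated.append({**frame, "item_id": item})  # annotate the frame
        records.append({"item": item, **attrs})       # store a record
    return annotated, records

frames = [{"signature": "sig-a", "t": 0}, {"signature": "sig-z", "t": 1}]
profile_db = {"sig-a": "fire extinguisher"}
secondary = {"fire extinguisher": {"last_inspected": "2023-01"}}
annotated, records = assess_stream(frames, profile_db, secondary)
```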
Capturing product details of purchases
Systems, methods and computer-readable media are disclosed for capturing purchase information regarding purchased items of a consumer. Upon receiving an image of a receipt (a receipt image) regarding a list of purchased items, receipt text is generated. The receipt text is processed to identify the purchased items in the receipt image. An iteration is then begun through the item blocks of the receipt text, where an item block corresponds to a discrete item in the receipt text. The processing comprises extracting textual elements from the item block and matching the textual elements to a known product. Upon matching the textual elements to a known product, the consumer inventory associated with the consumer is updated with regard to the purchase of the known product.
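The item-block loop described above can be sketched as follows. The one-line-per-block format, the price regex, and the product catalog are all assumptions made for this illustration.

```python
# Hedged sketch of the item-block iteration: receipt text is split into
# blocks, textual elements (name, price) are extracted from each block,
# the name is matched to a known product, and the consumer inventory is
# updated on a match. Block format and catalog are invented.

import re

CATALOG = {"MILK 2%": "milk", "WHT BREAD": "bread"}  # illustrative catalog

def update_inventory(receipt_text: str, inventory: dict) -> dict:
    for block in receipt_text.strip().splitlines():  # one block per item
        m = re.match(r"(.+?)\s+\d+\.\d{2}$", block)  # name followed by price
        if not m:
            continue                                 # unparseable block
        product = CATALOG.get(m.group(1).strip())    # match to known product
        if product:
            inventory[product] = inventory.get(product, 0) + 1
    return inventory

inv = update_inventory("MILK 2%   3.49\nWHT BREAD 2.19\n", {})
print(inv)  # {'milk': 1, 'bread': 1}
```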
METHOD AND SYSTEM THAT DETERMINE THE SUITABILITY OF A DOCUMENT IMAGE FOR OPTICAL CHARACTER RECOGNITION AND OTHER IMAGE PROCESSING
The current document is directed to a computationally efficient method and system for assessing the suitability of a text-containing digital image for various types of computational image processing, including optical character recognition. A text-containing digital image is evaluated by the disclosed methods and systems for sharpness, that is, for the absence of, or low levels of, noise, optical blur, and other defects and deficiencies. The evaluation uses computationally efficient steps, including convolution operations with small kernels to generate contour images and intensity-based evaluation of pixels within the contour images for sharpness and for proximity to intensity edges, in order to estimate the sharpness of a text-containing digital image for image-processing purposes.
Translation display device, translation display method, and control program
A translation display device of the present invention carries out a translation process of translating text extracted from a certain image and displays the translated text while a display frame rate is maintained. A translation processing section (5) carries out the translation process in a case where no image other than the certain image is being subjected to the translation process. An image movement analyzing section (7) identifies a displacement of a position of an object in a most recent image, which has been most recently obtained, the displacement being measured with respect to a reference position of the object in a reference image for which the translation process most recently ended. An image is generated in which translated text, obtained by translating the text extracted from the reference image, is displayed so as to be superimposed on the most recent image in accordance with (i) a position of the extracted text and (ii) information on the displacement.
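The superimposition step above amounts to shifting the translated text by the measured displacement. A minimal sketch, assuming simple (x, y) pixel coordinates; the function name and tuple layout are illustrative.

```python
# Sketch of the overlay step: translated text is anchored at the position the
# source text had in the reference image, shifted by the displacement of the
# tracked object between the reference image and the most recent image.

def overlay_position(text_pos, ref_obj_pos, recent_obj_pos):
    """Shift the translated text by the object's measured displacement."""
    dx = recent_obj_pos[0] - ref_obj_pos[0]
    dy = recent_obj_pos[1] - ref_obj_pos[1]
    return (text_pos[0] + dx, text_pos[1] + dy)

# The sign moved 12 px right and 3 px down since the reference frame,
# so the translated caption moves with it.
print(overlay_position((40, 100), (10, 20), (22, 23)))  # (52, 103)
```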
SYSTEMS AND METHODS FOR RECOGNIZING SYMBOLS IN IMAGES
A computer-implemented method comprises generating a description of a character symbol from a binarized image; comparing a template for the character symbol with the description of the character symbol based on a reference description, wherein the template comprises a grid of cells, a set of local features which may be present in the grid of cells, the reference description specifying which members of the set of local features should be present or absent in the grid of cells, and a threshold of an accepted deviation from the description of the character symbol; assigning a penalty value to the description of the character symbol via a cost function when a discrepancy exists based on the comparing; selecting the template as a match candidate for the character symbol when the penalty value is below the threshold; and recognizing the character symbol based on the selecting.
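The template test described above can be sketched with a toy grid. The feature names, the flat per-cell cost function, and the dictionary layout are invented for illustration, not taken from the patent.

```python
# Hedged sketch of the template comparison: a reference description lists,
# per grid cell, which local features must be present; each disagreement with
# the observed description accrues a penalty via a cost function, and the
# template is kept as a match candidate only if the penalty stays below the
# template's threshold of accepted deviation.

def penalty(description, reference, cost=1.0):
    """Sum a fixed cost for every cell whose features disagree."""
    return sum(
        cost
        for cell, required in reference.items()
        if description.get(cell, set()) != required
    )

def is_candidate(description, template):
    return penalty(description, template["reference"]) < template["threshold"]

template_O = {  # hypothetical 2x2-cell template for the letter "O"
    "reference": {(0, 0): {"arc"}, (0, 1): {"arc"},
                  (1, 0): {"arc"}, (1, 1): {"arc"}},
    "threshold": 2,
}
observed = {(0, 0): {"arc"}, (0, 1): {"arc"},
            (1, 0): {"arc"}, (1, 1): {"stem"}}  # one cell disagrees
print(is_candidate(observed, template_O))  # True: penalty 1 < threshold 2
```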
Writing pad with synchronized background audio and video and handwriting recognition
A standalone, low-cost writing pad includes a rechargeable battery, a low-capacity memory, a low-power processor, and a first pair of connectors, and supports audio, video, and digital-ink capturing functionalities. The writing pad may be detached from and re-attached to a standalone base unit using the first pair of connectors. The base unit includes another rechargeable battery, a high-capacity memory, a high-power processor, and a second pair of connectors. The base unit receives captured audio and digital ink from the writing pad via a communication pathway, and the high-power processor runs voice recognition and optical character recognition software on the received data to generate second data. The second data is displayed on the writing pad and/or stored in the high-capacity memory for future use.
Page layout determination of an image undergoing optical character recognition
A method and system are provided for identifying a page layout of an image that includes textual regions. The textual regions are to undergo optical character recognition (OCR). The system includes an input component that receives an input image that includes words around which bounding boxes have been formed, and a text identifying component that groups the words into a plurality of text regions. A reading line component groups words within each of the text regions into reading lines. A text region sorting component sorts the text regions in accordance with their reading order.
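The reading-line grouping can be sketched with a simple vertical-overlap rule. This is a stand-in heuristic, assuming each word carries its bounding-box top coordinate; the tolerance and data layout are illustrative.

```python
# Illustrative sketch of the reading-line step: words (given as bounding-box
# positions) are grouped into lines when their vertical positions are within
# a tolerance, then each line is ordered left-to-right, approximating
# top-to-bottom, left-to-right reading order.

def group_into_lines(words, tol=5):
    """words: list of (text, x, y) with y the top of the bounding box."""
    lines = []
    for word in sorted(words, key=lambda w: (w[2], w[1])):  # top-to-bottom
        if lines and abs(lines[-1][0][2] - word[2]) <= tol:
            lines[-1].append(word)          # close enough: same reading line
        else:
            lines.append([word])            # start a new reading line
    return [[w[0] for w in sorted(line, key=lambda w: w[1])]  # left-to-right
            for line in lines]

words = [("world", 60, 11), ("hello", 10, 10), ("next", 10, 40)]
print(group_into_lines(words))  # [['hello', 'world'], ['next']]
```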
Self-learning receipt optical character recognition engine
A method for receipt processing using optical character recognition (OCR). The method includes detecting, within a receipt image, a pre-determined characteristic of the receipt, obtaining an OCR receipt template based on the pre-determined characteristic of the receipt, extracting, based on at least the OCR receipt template, a vendor attribute from the receipt image corresponding to a pre-defined field of the receipt, verifying, based on matching the vendor attribute to a vendor associated with the OCR receipt template, that the receipt is generated by the vendor and that the OCR receipt template is applicable to perform the OCR of the receipt image, and generating, in response to the verifying, a textual content of the receipt by processing the receipt image based at least on the OCR receipt template.
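The template-selection and vendor-verification path above can be sketched as follows. The "logo hash" characteristic, the template table, and the stub OCR function are all assumptions made for this sketch.

```python
# Hedged sketch of the template path: a pre-determined characteristic of the
# receipt (here a logo-hash stand-in) selects an OCR receipt template; the
# vendor attribute extracted with that template must match the template's
# vendor before the template is trusted for full OCR of the receipt image.

TEMPLATES = {
    "logo-acme": {"vendor": "ACME Mart", "name_region": (0, 0, 200, 30)},
}

def extract_vendor(receipt_image, template):
    # Stand-in for OCR over template["name_region"]: the fake "image" here
    # carries its header text directly.
    return receipt_image["header_text"]

def ocr_with_template(receipt_image, characteristic):
    template = TEMPLATES.get(characteristic)
    if template is None:
        return None                              # no template: fall back
    vendor = extract_vendor(receipt_image, template)
    if vendor != template["vendor"]:
        return None                              # verification failed
    return f"receipt from {vendor}"              # proceed with full OCR

receipt = {"header_text": "ACME Mart"}
print(ocr_with_template(receipt, "logo-acme"))  # receipt from ACME Mart
```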
Evaluating image values
Images of items are evaluated. A first image of the item, having a view of two or more of its surfaces, is captured at a first time. A first measurement of at least one dimension of one or more of the surfaces is computed and stored. A second image of the item, having a view of at least one of the two or more surfaces, is captured at a second time, subsequent to the first time. A second measurement of the dimension is then computed and compared to the stored first measurement. The second measurement is evaluated based on the comparison.
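The comparison step above can be sketched minimally. The fractional tolerance and the result labels are assumptions for this illustration; the abstract does not specify how the evaluation is performed.

```python
# Minimal sketch of the evaluation: a dimension measured from the first image
# is stored, the same dimension is re-measured from a later image, and the
# later value is evaluated against the stored one with a simple tolerance.

def evaluate(stored_mm: float, later_mm: float, tol: float = 0.05) -> str:
    """Flag the later measurement if it deviates more than tol (fractional)."""
    if stored_mm == 0:
        return "invalid"                         # cannot compare against zero
    change = abs(later_mm - stored_mm) / stored_mm
    return "consistent" if change <= tol else "changed"

print(evaluate(120.0, 121.0))  # consistent (~0.8% change)
print(evaluate(120.0, 100.0))  # changed (~16.7% change)
```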