Patent classifications
G06V30/1478
Image processing apparatus and image forming apparatus
An image processing apparatus includes a character recognition section, a translation section, an image processing section, a selection acceptance section, and a control section. The character recognition section performs character recognition processing on image data. The translation section translates an original text, obtained through the character recognition processing performed by the character recognition section, into a predetermined language and creates a translated text. The image processing section generates a replaced image in which the original text in a text portion of the original image shown in the image data is replaced by the translated text. The selection acceptance section accepts an instruction selecting, as an output target, either one or both of the original image shown in the image data and the replaced image. The control section performs, in accordance with the accepted instruction, processing of outputting the image selected as the output target.
Predicting outcome in invasive breast cancer from collagen fiber orientation disorder features in tumor associated stroma
Embodiments discussed herein relate to accessing a digitized image, associated with a patient, of tissue demonstrating breast cancer pathology; segmenting a tumor region represented in the digitized image; segmenting collagen fibers represented in the tumor region; computing collagen vectors based on the segmented collagen fibers; generating an orientation co-occurrence matrix based on the collagen vectors; computing a collagen fiber orientation disorder feature based on the orientation co-occurrence matrix; upon determining that the collagen fiber orientation disorder feature exceeds a threshold value, generating a prognosis of the region of tissue as unlikely to experience breast cancer recurrence; upon determining that the collagen fiber orientation disorder feature is less than or equal to the threshold value, generating a prognosis of the region of tissue as likely to experience breast cancer recurrence; classifying the patient as high-risk or low-risk of recurrence based, at least in part, on the prognosis; and displaying the classification.
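The orientation co-occurrence step above can be sketched in pure NumPy. All names (`orientation_cooccurrence`, `disorder_feature`), the pairing of fibers within a fixed radius, and the choice of Shannon entropy as the disorder measure are illustrative assumptions, not the patent's actual feature definition:

```python
import numpy as np

def orientation_cooccurrence(angles_deg, positions, n_bins=12, radius=20.0):
    """Build an orientation co-occurrence matrix: for every ordered pair of
    fibers closer than `radius`, count the joint occurrence of their
    quantized orientations (0-180 degrees folded into n_bins)."""
    bins = ((np.asarray(angles_deg) % 180) * n_bins // 180).astype(int)
    pos = np.asarray(positions, dtype=float)
    m = np.zeros((n_bins, n_bins))
    for i in range(len(bins)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        for j in np.nonzero((d > 0) & (d <= radius))[0]:
            m[bins[i], bins[j]] += 1
    return m

def disorder_feature(m):
    """Shannon entropy of the normalized co-occurrence matrix: higher
    entropy means neighboring fiber orientations are more disordered."""
    total = m.sum()
    if total == 0:
        return 0.0
    p = m / total
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())
```

Perfectly aligned fibers put all co-occurrence mass in one cell (entropy 0); mixed orientations spread it across cells and raise the feature, which is then compared against the threshold as in the claim.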
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
An image processing apparatus executes a first morphology operation on a first binary image to generate a second binary image, specifies a vertical line missing region based on the second binary image, executes a second morphology operation on the second binary image under a condition different from that of the first morphology operation to generate a third binary image, acquires pixel information about a region corresponding to the vertical line missing region in the third binary image, and corrects the region corresponding to the vertical line missing region in the first binary image using the acquired pixel information to generate a fourth binary image.
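One way to picture the morphology passes is a closing (dilation, then erosion) with a vertical structuring element, which fills short gaps in broken vertical lines. The shift-and-combine NumPy implementation below is a hypothetical stand-in for whatever morphology conditions the apparatus actually uses, and border rows are left untouched rather than zero-padded, which is adequate for a sketch:

```python
import numpy as np

def dilate_vertical(img, k):
    """Binary dilation with a k x 1 vertical structuring element,
    implemented as shift-and-OR (no image library required)."""
    out = img.copy()
    for s in range(1, k // 2 + 1):
        out[s:, :] |= img[:-s, :]
        out[:-s, :] |= img[s:, :]
    return out

def erode_vertical(img, k):
    """Binary erosion with a k x 1 vertical structuring element,
    implemented as shift-and-AND."""
    out = img.copy()
    for s in range(1, k // 2 + 1):
        out[s:, :] &= img[:-s, :]
        out[:-s, :] &= img[s:, :]
    return out

def close_vertical(img, k):
    """Morphological closing along the vertical axis: fills gaps of up
    to roughly k - 1 pixels in broken vertical lines."""
    return erode_vertical(dilate_vertical(img, k), k)
```

A one-pixel break in a vertical stroke survives the dilation merged shut and the erosion restores the stroke's width, while untouched columns stay empty.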
Method for correcting optical character recognition text position, storage medium and electronic device
The present disclosure provides a method for correcting an OCR text position, a storage medium and an electronic device. The method includes: determining a first slope of each text block in an OCR recognition result of a to-be-processed image; fitting a tilt field function in accordance with the first slope of each text block; determining an offset value of each text block in accordance with the tilt field function; and correcting a position of each text block in accordance with the offset value.
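Under the simplifying assumption that the tilt field is linear in the horizontal position and that each block's offset is the accumulated vertical drift at its x position, the four steps of the method could look like this (the `(x_center, y_top, slope)` block representation and the function name are assumptions for illustration):

```python
import numpy as np

def correct_positions(blocks):
    """blocks: list of (x_center, y_top, slope) per OCR text block.
    1) take the first slope of each block,
    2) fit a linear tilt field slope(x) = a*x + b by least squares,
    3) compute each block's offset from the fitted field,
    4) correct each block's vertical position by that offset."""
    x = np.array([b[0] for b in blocks], dtype=float)
    y = np.array([b[1] for b in blocks], dtype=float)
    s = np.array([b[2] for b in blocks], dtype=float)
    a, b = np.polyfit(x, s, 1)       # step 2: tilt field as a function of x
    offset = (a * x + b) * x         # step 3: vertical drift at each block
    return list(zip(x, y - offset))  # step 4: corrected positions
```

With a uniform slope of 0.1, blocks lying on the tilted baseline y = 0.1·x + 5 all snap back to y = 5.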
SYSTEM AND METHOD OF CHARACTER RECOGNITION USING FULLY CONVOLUTIONAL NEURAL NETWORKS WITH ATTENTION
Embodiments of the present disclosure include a method that obtains a digital image. The method includes extracting a word block from the digital image. The method includes processing the word block by evaluating a value of the word block against a dictionary. The method includes outputting a prediction equal to a common word in the dictionary when a confidence factor is greater than a predetermined threshold. The method includes processing the word block and assigning a descriptor to the word block corresponding to a property of the word block. The method includes processing the word block using the descriptor to prioritize evaluation of the word block. The method includes concatenating a first output and a second output. The method includes predicting a value of the word block.
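The confidence-gated dictionary step might be approximated with the standard library's fuzzy matcher; `resolve_word`, the 0.9 confidence threshold, and the 0.8 similarity cutoff are assumptions for illustration, not the disclosed network's behavior:

```python
import difflib

def resolve_word(raw_prediction, confidence, dictionary, threshold=0.9):
    """If the recognizer is confident and the raw prediction is close to
    a common dictionary word, output that word; otherwise fall back to
    the raw character-level prediction of the word block."""
    if confidence > threshold:
        matches = difflib.get_close_matches(
            raw_prediction, dictionary, n=1, cutoff=0.8)
        if matches:
            return matches[0]
    return raw_prediction
```

A confident but slightly garbled read snaps to the nearest dictionary entry, while a low-confidence read is passed through unchanged for the descriptor-guided processing described above.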
A SYSTEM FOR REAL-TIME AUTOMATED SEGMENTATION AND RECOGNITION OF VEHICLE'S LICENSE PLATES CHARACTERS FROM VEHICLE'S IMAGE AND A METHOD THEREOF.
The present invention discloses a system for automated segmentation and recognition of vehicle license plate characters, comprising an imaging processor connected to at least one image grabber module or camera. The image grabber module captures images of vehicles and forwards them to the connected imaging processor; the imaging processor segments and recognizes the license plate character region, including regions with deformed license plate characters, in the captured vehicle images by binarizing maximally stable extremal regions corresponding to the probable license plate region.
DEEP-LEARNING-BASED SYSTEM AND PROCESS FOR IMAGE RECOGNITION
Disclosed are methods and systems for using artificial intelligence (AI) for image recognition. A server uses predefined coordinates to extract a portion of a received image, the extracted portion comprising a word to be identified having at least a first letter and a second letter, and executes an image recognition protocol to identify the first letter. When the server is unable to identify the second letter, it executes an AI model having a nodal data structure to identify the second letter based upon the identified first letter. The nodal data structure comprises a set of nodes in which each node represents a letter and is connected to at least one other node, wherein a connection from a first node to a second node corresponds to the probability that the letter corresponding to the second node follows the letter corresponding to the first node within a word.
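A minimal sketch of such a nodal data structure, assuming a plain bigram-probability graph (the real AI model is presumably learned rather than hand-written, and `predict_letter` is a hypothetical name):

```python
def predict_letter(graph, known_letter, candidates=None):
    """graph: {letter: {next_letter: probability}} -- each key is a node,
    each entry in its dict an edge weighted by the probability that the
    next letter follows it in a word.  Returns the most likely successor
    of `known_letter`, optionally restricted to a set of visually
    plausible candidates from the failed recognition step."""
    edges = graph.get(known_letter, {})
    if candidates is not None:
        edges = {c: p for c, p in edges.items() if c in candidates}
    return max(edges, key=edges.get) if edges else None
```

For example, with an unreadable letter after a recognized "q", the highest-probability outgoing edge resolves the ambiguity: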
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND STORAGE MEDIUM
The image processing apparatus includes a reading unit configured to read a document and generate a scanned image; a dividing unit configured to analyze a distribution of constituent pixels of the scanned image and divide the scanned image based on document components; an obtaining unit configured to obtain an inclination of a predetermined area among the areas into which the scanned image is divided by the dividing unit; a classifying unit configured to classify the predetermined area into a predetermined area group based on the obtained inclination; a setting unit configured to set a circumscribed rectangle encompassing the predetermined areas included in the predetermined area group; a specifying unit configured to specify, as a document area, an area whose feature amount changes in the scanned image outward from the circumscribed rectangle; and a cropping unit configured to crop the specified document area as a document image.
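Stripped of the inclination grouping, the circumscribed-rectangle-and-crop idea reduces to bounding the non-background pixels. This NumPy sketch assumes a clean grayscale scan with a uniform background value; `crop_document` is a hypothetical stand-in for the setting, specifying, and cropping units:

```python
import numpy as np

def crop_document(scan, background=0):
    """Finds the circumscribed rectangle of all non-background pixels in
    a grayscale scan and crops to it."""
    mask = scan != background
    if not mask.any():
        return scan  # blank scan: nothing to crop
    rows = np.nonzero(mask.any(axis=1))[0]
    cols = np.nonzero(mask.any(axis=0))[0]
    return scan[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```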