Patent classifications
G06V30/287
PATTERN RECOGNITION DEVICE, PATTERN RECOGNITION METHOD, AND COMPUTER PROGRAM PRODUCT
According to an embodiment, a pattern recognition device is configured to divide an input signal into a plurality of elements, convert the divided elements into feature vectors having the same dimensionality to generate a set of feature vectors, and evaluate the set of feature vectors using a recognition dictionary including models corresponding to respective classes, to output a recognition result representing a class or a set of classes to which the input signal belongs. The models each include sub-models, each corresponding to one of the possible division patterns in which a signal to be classified into the class corresponding to the model can be divided into a plurality of elements. A label expressing a model including a sub-model conforming to the set of feature vectors, or a set of labels expressing a set of models including sub-models conforming to the set of feature vectors, is output as the recognition result.
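As a toy illustration of the evaluation step, the sketch below assumes each class model holds one sub-model per possible division pattern, represented as a sequence of prototype feature vectors, and scores a set of feature vectors only against sub-models whose length matches the division (all class names, prototypes, and the distance measure are hypothetical, not from the patent):

```python
import math

# Hypothetical models: each class maps to sub-models, one per possible
# division pattern; a sub-model is a sequence of prototype feature vectors.
MODELS = {
    "A": [[(0.0, 0.0)], [(0.0, 0.0), (1.0, 1.0)]],  # 1- and 2-element divisions
    "B": [[(5.0, 5.0)], [(5.0, 5.0), (6.0, 6.0)]],
}

def _score(feature_vectors, sub_model):
    # sum of element-wise Euclidean distances to the prototypes
    return sum(math.dist(f, p) for f, p in zip(feature_vectors, sub_model))

def recognize(feature_vectors):
    """Return the label of the model whose best-conforming sub-model
    is closest to the given set of feature vectors."""
    best_label, best_score = None, float("inf")
    for label, sub_models in MODELS.items():
        for sub in sub_models:
            if len(sub) != len(feature_vectors):  # sub-model must match the division
                continue
            score = _score(feature_vectors, sub)
            if score < best_score:
                best_label, best_score = label, score
    return best_label
```

A two-element input such as `[(5.0, 5.1), (6.0, 6.0)]` is compared only against the two-element sub-models, so the division pattern itself constrains which sub-models can conform.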
RECOGNITION DEVICE, RECOGNITION METHOD, AND COMPUTER PROGRAM PRODUCT
According to an embodiment, a recognition device includes a detector, a recognizer, and a matcher. The detector is configured to detect a character candidate from an input image. The recognizer is configured to generate a recognition candidate from the character candidate. The matcher is configured to match the recognition candidate with a knowledge dictionary that contains modeled character strings to be recognized, and generate a matching result obtained by matching a character string presumed to be included in the input image with the dictionary. Either a real character code that represents a character or a virtual character code that specifies a command is assigned to each edge. When shifting the state of the dictionary in accordance with an edge to which a virtual character code is assigned, the matcher gives the command specified by that virtual character code to a command processor.
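A minimal sketch of such a matcher, assuming a hypothetical dictionary encoded as a state machine whose edges carry either a real character code (a character to consume) or a virtual character code (a command handed to a command processor when the edge is traversed); the marker byte, states, and the single modeled word are all invented for illustration, and virtual edges are assumed not to form cycles:

```python
VIRTUAL_PREFIX = "\x00"  # assumed marker distinguishing virtual from real codes

# states: {state: {edge_code: next_state}}; accepting states end a modeled word
EDGES = {
    0: {VIRTUAL_PREFIX + "begin": 1},  # virtual edge carrying a command
    1: {"c": 2},
    2: {"a": 3},
    3: {"t": 4},
}
ACCEPTING = {4}

def match(candidates, command_processor):
    """Walk the dictionary over recognition candidates; virtual edges are
    followed without consuming input, passing their command along."""
    state = 0
    for ch in candidates:
        edges = EDGES.get(state, {})
        # follow virtual edges first, giving each command to the processor
        while any(c.startswith(VIRTUAL_PREFIX) for c in edges):
            code = next(c for c in edges if c.startswith(VIRTUAL_PREFIX))
            command_processor(code[len(VIRTUAL_PREFIX):])
            state = edges[code]
            edges = EDGES.get(state, {})
        if ch not in edges:
            return False
        state = edges[ch]
    return state in ACCEPTING
```

Matching `"cat"` traverses the virtual edge first (emitting the `"begin"` command) and then consumes the three real character codes to reach the accepting state.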
TRANSLATION APPARATUS, TRANSLATION SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
A translation apparatus includes a translation unit which translates content of a document into a different language; a history creating unit which, in translation of the content from a first language into a second language, creates history information including a correspondence between original text in the first language and translated text in the second language; an extraction unit which, in translation of the content from the second language into another language, if content (present content) of the document in the second language is present in the history information, extracts content (absent content) that is not present in the history information; and a combining unit which combines a translation result obtained by translating the absent content from the second language into the other language with a replacement result obtained by replacing the present content with text in the other language based on the history information.
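As a toy sketch of the combining step, the history is modeled below as a plain mapping from second-language segments to replacement text derived from the history information; segments found in the history are replaced, and the remaining (absent) segments go through the translation unit. The function names, segment granularity, and dictionary shape are all hypothetical:

```python
def translate_with_history(segments, history, machine_translate):
    """Combine replacement results (segments found in the history)
    with translation results (segments absent from the history)."""
    out = []
    for seg in segments:
        if seg in history:
            out.append(history[seg])            # replacement result from history
        else:
            out.append(machine_translate(seg))  # translation result
    return " ".join(out)
```

In a real apparatus the replacement would be driven by the stored first-language originals rather than a flat lookup table; the sketch only shows how the two result kinds are interleaved.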
METHOD OF RECOGNIZING TEXT, DEVICE, STORAGE MEDIUM AND SMART DICTIONARY PEN
A method of recognizing text, which relates to the field of artificial intelligence technology, in particular to the field of computer vision and deep learning technology, and may be applied to optical character recognition or other applications. The method includes: acquiring a plurality of image sequences by continuously scanning a document; performing an image stitching, so as to obtain a plurality of successive frames of stitched images corresponding to the plurality of image sequences respectively, wherein an overlapping region exists between each two successive frames of stitched images; performing a text recognition based on the plurality of successive frames of stitched images, so as to obtain a plurality of corresponding recognition results; and performing a de-duplication on the plurality of recognition results based on the overlapping region between each two successive frames of stitched images, so as to obtain a text recognition result for the document.
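The de-duplication step can be sketched on plain text as follows; the patent's overlap is an image region between successive stitched frames, but as a simplification the sketch below treats each frame's recognition result as a string and drops the longest suffix/prefix overlap between successive results (a hypothetical reduction, not the patented method):

```python
def deduplicate(results):
    """Merge per-frame text results, dropping the duplicated text that
    comes from the overlapping region between successive frames."""
    merged = results[0] if results else ""
    for nxt in results[1:]:
        # find the longest suffix of `merged` that is a prefix of `nxt`
        overlap = 0
        for k in range(min(len(merged), len(nxt)), 0, -1):
            if merged.endswith(nxt[:k]):
                overlap = k
                break
        merged += nxt[overlap:]  # keep only the non-overlapping remainder
    return merged
```

For example, successive results `"scanni"` and `"nning pen"` share the overlap `"nni"` and merge into `"scanning pen"`.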
Information processing apparatus, control method, and recording medium storing program
An information processing apparatus includes: a determiner that determines an area including a handwritten figure from image data; a recognizer that recognizes a handwritten character from the handwritten figure; an acquirer that acquires a file name; and a file generator that generates a file with a file name based on the handwritten character when the recognizer recognizes the handwritten character based on the image data, and generates a file with the file name acquired by the acquirer when the recognizer does not recognize a handwritten character.
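The file generator's fallback rule can be sketched as a small helper; the parameter names and the `.pdf` extension are hypothetical, standing in for the recognizer's output and the acquirer's file name:

```python
def choose_file_name(recognized_character, acquired_name):
    """Name the file after the recognized handwritten character when
    recognition succeeds, otherwise fall back to the acquired name."""
    if recognized_character:
        return f"{recognized_character}.pdf"
    return acquired_name
```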
CHARACTER ENCODING AND DECODING FOR OPTICAL CHARACTER RECOGNITION
The present disclosure provides techniques for encoding and decoding characters for optical character recognition. The techniques involve determining sets of numbers for encoding a character set, where each number in a particular set of numbers for encoding a particular character is mapped to a graphical unit (e.g., radical) of the particular character. A mapping between each set of numbers in the possible encodings and the character set may be determined based on the closest character already encoded. A machine learning model may be trained to perform optical character recognition using training data labeled using the set of encodings and the mappings.
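A toy sketch of the encoding idea: each character is encoded as the tuple of numbers assigned to its graphical units (radicals), and a code is decoded by picking the already-encoded character sharing the most numbers. The radical-to-number table, the two example characters, and the similarity measure are all hypothetical:

```python
RADICAL_IDS = {"亻": 1, "木": 2, "口": 3}  # hypothetical radical numbering

CHARACTER_RADICALS = {
    "休": ["亻", "木"],  # person + tree
    "呆": ["口", "木"],  # mouth + tree
}

def encode(character):
    """Map a character to the tuple of numbers of its radicals."""
    return tuple(RADICAL_IDS[r] for r in CHARACTER_RADICALS[character])

def closest(code, encoded_characters):
    """Decode by choosing the already-encoded character whose radical
    numbers share the most members with the given code."""
    return max(encoded_characters,
               key=lambda ch: len(set(code) & set(encode(ch))))
```

Here `休` encodes to `(1, 2)`, and decoding `(1, 2)` against the two encoded characters recovers `休` because it shares both numbers while `呆` shares only one.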
FONT DETECTION METHOD AND SYSTEM USING ARTIFICIAL INTELLIGENCE-TRAINED NEURAL NETWORK
The present disclosure relates to a font detection method using a neural network. The font detection method using the neural network according to the present disclosure includes receiving a target text image including text; resizing a horizontal or vertical size to a reference input size according to an aspect ratio of the input target text image; and inputting the resized target text image into a trained neural network and outputting a font of the text included in the target text image, and the neural network may be trained with a unit image extracted as a unit region of the reference input size from a training image generated by synthesizing a background with the text. According to the present disclosure, fonts in various usage scenarios may be effectively detected.
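The resizing rule can be sketched as follows, under the assumption that the side selected by the aspect ratio (the longer one) is scaled to the reference input size while the aspect ratio is preserved; the reference value of 224 is hypothetical:

```python
REFERENCE = 224  # assumed reference input size of the network

def resized_shape(width, height):
    """Scale the width or the height to REFERENCE depending on the
    aspect ratio, keeping the aspect ratio itself unchanged."""
    if width >= height:  # wide image: fix the horizontal size
        return REFERENCE, round(height * REFERENCE / width)
    return round(width * REFERENCE / height), REFERENCE  # tall image
```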
Image processing system, image processing apparatus, image processing method, and storage medium
An image processing system acquires a scanned image obtained by scanning an original, and extracts a character region that includes characters from within the scanned image. The image processing system performs conversion processing, for converting a font of a character included in the extracted character region from a first font to a second font, on the scanned image using a conversion model for which training has been performed in advance so as to convert characters of the first font in an inputted image into characters of the second font and output a converted image. Then, the image processing system executes OCR on the scanned image after the conversion processing.
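The order of operations in this pipeline (convert the font first, run OCR on the converted image) can be sketched with the conversion model and the OCR engine stubbed out as plain functions; both callables and the string-based image stand-in are hypothetical:

```python
def process_scan(scanned_image, convert_font, run_ocr):
    """Convert first-font characters to the second font using a trained
    conversion model, then execute OCR on the converted image."""
    converted = convert_font(scanned_image)  # trained conversion model
    return run_ocr(converted)                # OCR on the converted image
```

The design point is that OCR never sees the original first-font characters; a model tuned for the second font can therefore be reused unchanged.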
Display device, display method, and computer-readable recording medium
A display device includes circuitry configured to perform a search for a plurality of image candidates in an image transformation dictionary part, based on handwritten data, and a display configured to display the plurality of image candidates obtained by the search. At least a portion of the plurality of image candidates displayed on the display represents a different person or object.
HANDWRITING PROCESSING METHOD, HANDWRITING PROCESSING DEVICE AND NON-TRANSITORY STORAGE MEDIUM
A handwriting processing method, a handwriting processing device and a non-transitory storage medium are provided. The handwriting processing method includes: acquiring a handwriting point group corresponding to a stroke on a working surface of a touch device, the handwriting point group including a plurality of handwriting points arranged in sequence, and data of each handwriting point in the plurality of handwriting points including a coordinate and an action type; determining a plurality of model patterns corresponding to the plurality of handwriting points, the plurality of model patterns being in one-to-one correspondence with the plurality of handwriting points; and sequentially connecting the plurality of model patterns, to determine a handwriting track for displaying corresponding to the handwriting point group.
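A toy sketch of the point-to-pattern step: each handwriting point carries a coordinate and an action type, each point selects exactly one model pattern, and the patterns are connected in sequence into a displayable track. The pattern table and the string-based track representation are hypothetical simplifications:

```python
PATTERNS = {"down": "o", "move": "-", "up": "x"}  # hypothetical pattern table

def build_track(points):
    """Map each handwriting point to its model pattern one-to-one
    (keyed on the action type), then connect the patterns in order."""
    patterns = [PATTERNS[action] for _coordinate, action in points]
    return "".join(patterns)
```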