Patent classifications
G06V30/287
Method and system for converting font of Chinese character in image, computer device and medium
A method and a system for converting a font of a Chinese character in an image, a computer device and a medium are disclosed. A specific implementation of the method includes: acquiring a stroke of a to-be-converted Chinese character in the image and spatial distribution information of the stroke; and generating a Chinese character in a target font that corresponds to the to-be-converted Chinese character in the image according to the stroke of the to-be-converted Chinese character, the spatial distribution information of the stroke and standard stroke information of the target font, to replace the to-be-converted Chinese character.
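The stroke-and-layout pipeline described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the `Stroke` type, the `STANDARD_STROKES` table, and the scale-based re-shaping are all hypothetical stand-ins for "standard stroke information of the target font."

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    kind: str     # e.g. "horizontal", "vertical", "dot"
    bbox: tuple   # (x, y, w, h): spatial distribution information of the stroke

# Hypothetical standard stroke information for a target font:
# a re-shaping factor applied per stroke kind when regenerating the character.
STANDARD_STROKES = {"horizontal": 1.0, "vertical": 0.9, "dot": 0.5}

def convert_font(strokes, standard=STANDARD_STROKES):
    """Generate target-font strokes from extracted strokes and their layout."""
    converted = []
    for s in strokes:
        scale = standard.get(s.kind, 1.0)
        x, y, w, h = s.bbox
        # Keep the stroke's position; re-shape it to the target font's standard.
        converted.append(Stroke(s.kind, (x, y, w * scale, h * scale)))
    return converted

source = [Stroke("horizontal", (0, 0, 10, 2)), Stroke("dot", (4, 4, 2, 2))]
target = convert_font(source)
```

The key idea the abstract describes is that position (spatial distribution) is preserved while stroke shape is replaced by the target font's standard form.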
Information processing system for obtaining read data of handwritten characters, training a model based on the characters, and producing a font for printing using the model
An information processing system acquires, using a reading device, a read image from an original on which a handwritten character is written; acquires, based on the read image, a partial image that is a partial region of the read image and a binarized image that expresses the partial image by two tones; performs learning of a learning model based on learning data that uses the partial image as a correct answer image and the binarized image as an input image; acquires print data including a font character; generates conversion image data including a gradation character obtained by inputting the font character to the learning model; and causes an image forming device to form an image based on the generated conversion image data.
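The learning-data construction step above (binarized image as input, grayscale partial image as correct answer) can be sketched in a few lines. This is a minimal illustration under assumed conventions (0–255 grayscale rows, a fixed binarization threshold), not the system's actual pipeline.

```python
def binarize(image, threshold=128):
    """Express a grayscale image (list of rows of 0-255 ints) in two tones."""
    return [[255 if px >= threshold else 0 for px in row] for row in image]

def make_learning_pair(partial_image):
    """Pair a binarized input image with the partial image as correct answer."""
    return {"input": binarize(partial_image), "target": partial_image}

partial = [[30, 200], [140, 90]]   # a partial region of the read image
pair = make_learning_pair(partial)
```

A learning model trained on such pairs learns to map two-tone font characters back to gradation (grayscale) renderings, which is what lets a plain font character be converted into a handwriting-like gradation character at print time.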
Processing irregularly arranged characters
Aspects of the present disclosure relate to processing irregularly arranged characters. An image is received. An irregularly arranged character within the image is detected. A direction of the irregularly arranged character is modified to a proper direction to obtain a properly oriented character. The properly oriented character is recognized to obtain a first identified character. The image is then rebuilt by replacing the irregularly arranged character with the first identified character, which is in a machine-encoded format.
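The detect → reorient → recognize → rebuild flow can be sketched as below. Everything here is a hypothetical toy: characters are modeled as (label, angle) pairs and the recognizer is a stub, whereas a real system would rotate and classify pixel data.

```python
def normalize(glyph, angle):
    """Modify the character's direction to the proper (upright) direction."""
    # Rotating the glyph by -angle yields the properly oriented character.
    return (glyph, 0)

def recognize(oriented):
    """Stub recognizer: expects an upright character, returns its label."""
    glyph, angle = oriented
    assert angle == 0, "recognizer expects a properly oriented character"
    return glyph

def rebuild(detected):
    """Replace each irregularly arranged character with its machine-encoded form."""
    return "".join(recognize(normalize(g, a)) for g, a in detected)

result = rebuild([("H", 90), ("i", 180)])
```

The point of the ordering is that recognition happens only after reorientation, so a single upright-character recognizer suffices for arbitrarily rotated input.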
Image processing apparatus and non-transitory computer readable medium
An image processing apparatus includes an acquisition unit that acquires an image, and a modifying unit that modifies the acquired image by using machine learning to turn an intermittent line (a line different from a line that constitutes a character) into a mark, at a stage before a classifying unit classifies the image into characters and marks.
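A crude, non-learned stand-in for the modifying unit's decision can be sketched as a run-length heuristic: a row of several short ink segments reads as an intermittent (dashed) line rather than a character stroke. The thresholds below are hypothetical placeholders for what the abstract attributes to machine learning.

```python
def classify_row(pixels, min_segments=3, max_segment_len=4):
    """Label a pixel row as a dashed 'mark' or a 'character-line'."""
    runs, length = [], 0
    for px in pixels + [0]:          # sentinel to flush the final run
        if px:
            length += 1
        elif length:
            runs.append(length)
            length = 0
    dashed = len(runs) >= min_segments and all(r <= max_segment_len for r in runs)
    return "mark" if dashed else "character-line"

label = classify_row([1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
```

Running this before character/mark classification mirrors the staging the abstract describes: ambiguous intermittent lines are resolved into marks up front.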
METHOD AND APPARATUS FOR ACQUIRING INFORMATION, ELECTRONIC DEVICE AND STORAGE MEDIUM
Disclosed is a method for acquiring information. The method includes: acquiring a file to be processed and an information type; recognizing at least one piece of candidate information related to the information type from the file to be processed; determining a target recognition feature and a semantic feature of each piece of candidate information, wherein the target recognition feature is configured to describe a matching condition between each piece of candidate information and the information type; and determining target information from the at least one piece of candidate information based on the target recognition feature and the semantic feature.
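One plausible way to combine the two features when determining the target information is a weighted score per candidate. The weights, field names, and example data below are purely illustrative; the patent does not specify how the features are fused.

```python
def pick_target(candidates, w_match=0.6, w_semantic=0.4):
    """Combine the target recognition feature (type-match score) with the
    semantic feature score and return the best candidate's text."""
    def score(c):
        return w_match * c["match"] + w_semantic * c["semantic"]
    return max(candidates, key=score)["text"]

candidates = [
    {"text": "2021-05-01",  "match": 0.9, "semantic": 0.4},
    {"text": "May 1, 2021", "match": 0.7, "semantic": 0.9},
]
best = pick_target(candidates)
```

Here the second candidate wins (score 0.78 vs. 0.70): a weaker type match is outweighed by stronger semantic agreement, which is the benefit of using both features rather than the recognition feature alone.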
TEXT RECOGNITION IN IMAGE
According to implementations of the subject matter described herein, there is provided a solution for text recognition in an image. In this solution, a target text line area, which is expected to include a text to be recognized, is determined from an image. Probability distribution information of a character model element(s) present in the target text line area is determined using a single character model. The single character model is trained based on training text line areas and respective ground-truth texts in the training text line areas. Texts in the training text line areas are arranged in different orientations, and/or the ground-truth texts comprise texts related to various languages (e.g., a Latin language and an Eastern language). The text in the target text line area can be determined based on the determined probability distribution information. The single character model enables more efficient and convenient text recognition.
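The final step, turning probability distribution information into text, can be sketched as greedy decoding: pick the highest-probability character model element at each position. The alphabet and the probabilities below are illustrative, and real systems typically use more elaborate decoders (e.g. beam search).

```python
ALPHABET = ["a", "b", "c"]   # hypothetical set of character model elements

def greedy_decode(prob_dists, alphabet=ALPHABET):
    """Decode one distribution over character model elements per position."""
    text = []
    for dist in prob_dists:
        best = max(range(len(dist)), key=lambda i: dist[i])
        text.append(alphabet[best])
    return "".join(text)

text = greedy_decode([[0.1, 0.7, 0.2], [0.8, 0.1, 0.1]])
```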
PRINTING SYSTEM AND PRINTING DEVICE
The printing system includes a handwritten character data extraction unit, a sample character data retrieval unit, a determination unit, a character practice worksheet creating unit, and a print control unit. The determination unit determines whether the matching ratio between a handwritten character extracted by the handwritten character data extraction unit and a sample character retrieved by the sample character data retrieval unit is equal to or lower than a first ratio. If the determination unit determines that the matching ratio is equal to or lower than the first ratio, the character practice worksheet creating unit creates a character practice worksheet containing the sample character that matches the handwritten character and a blank cell for handwriting practice formed next to the sample character. The print control unit controls a printing unit to print the character practice worksheet created by the character practice worksheet creating unit on paper.
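The threshold logic the determination unit applies can be sketched as below. The pixel-overlap matching ratio and the value of the first ratio are illustrative assumptions; the patent does not define how the ratio is computed.

```python
def matching_ratio(handwritten, sample):
    """Toy overlap ratio between a handwritten and a sample character bitmap."""
    same = sum(h == s for h, s in zip(handwritten, sample))
    return same / max(len(sample), 1)

def make_worksheet_row(handwritten, sample, first_ratio=0.7):
    """Create a practice row (sample + blank cell) only when the match is poor."""
    if matching_ratio(handwritten, sample) <= first_ratio:
        return [sample, "<blank cell>"]
    return None   # handwriting is good enough; no practice row needed

row = make_worksheet_row([1, 0, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0])
```

The design point is that worksheets are generated selectively: only characters whose match falls at or below the first ratio get a sample-plus-blank-cell practice row.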
METHOD AND APPARATUS FOR CHARACTER SELECTION BASED ON CHARACTER RECOGNITION, AND TERMINAL DEVICE
Embodiments of this application are applicable to the field of artificial intelligence technologies, and provide a method and an apparatus for character selection based on character recognition, and a terminal device. The method includes: obtaining a connectionist temporal classification sequence corresponding to text content in an original picture; calculating character coordinates of each character in the connectionist temporal classification sequence; mapping the character coordinates of each character to the original picture, to obtain target coordinates of each character in the original picture; and generating a character selection control in the original picture based on the target coordinates. The character selection control is used to prompt a user to select a character in the original picture. By using the foregoing method, the precision of character positioning, as well as the efficiency and accuracy of manual character selection, can be improved.
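The coordinate steps above can be sketched as follows: collapse the CTC sequence (repeats merged, blanks dropped) while recording which timesteps each character spans, then map timestep spans to picture coordinates. The per-timestep stride, scale factor, and line offset are hypothetical stand-ins for the real geometry of the recognized text line.

```python
def ctc_collapse_with_spans(labels, blank="-"):
    """Collapse a CTC label sequence; return [char, first_t, last_t] per char."""
    chars, prev = [], None
    for t, lab in enumerate(labels):
        if lab != blank and lab != prev:
            chars.append([lab, t, t])        # new character starts here
        elif lab != blank and lab == prev:
            chars[-1][2] = t                 # same character, extend its span
        prev = lab
    return chars

def map_to_picture(labels, stride=4, scale=2.0, line_x0=10):
    """Map each character's timestep span to target coordinates in the picture."""
    coords = []
    for ch, t0, t1 in ctc_collapse_with_spans(labels):
        x0 = line_x0 + t0 * stride * scale
        x1 = line_x0 + (t1 + 1) * stride * scale
        coords.append((ch, x0, x1))
    return coords

boxes = map_to_picture(list("--hh-i--"))
```

These per-character boxes are exactly what a character selection control needs: each character in the picture gets its own selectable region instead of one box per text line.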
IMAGE PROCESSING APPARATUS
An image processing apparatus includes an input section for inputting image data, and an image processing section for discriminating a marking area out of image data and generating image data of fill-in-blank questions with the marking area converted to a blank answer field. For generation of the image data of fill-in-blank questions, the image processing section recognizes a character count of characters present in the marking area, determines, as an answer-field character count, a character count resulting from adding a margin number to the character count of the marking area, and changes a size of the answer field to a size adapted to the answer-field character count.
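The answer-field sizing rule is simple arithmetic and can be sketched directly: the answer-field character count is the marking area's character count plus a margin number, and the field is sized to that count. The cell width is an illustrative assumption.

```python
def answer_field(marked_text, margin=2, cell_width=12):
    """Size a blank answer field from the marked characters plus a margin number."""
    count = len(marked_text) + margin        # answer-field character count
    return {"chars": count, "width_px": count * cell_width}

field = answer_field("kanji", margin=2)
```

The margin prevents the blank field's length from revealing the exact length of the hidden answer, which is the practical reason for padding the count.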
Optical character recognition method
The optical character recognition method applies a first OCR engine to identify characters of at least a first type, and zones containing at least a second type of characters, in the character string image. A second OCR engine is applied to the zones of the at least second type of characters to identify the characters of the second type. In a further step, the characters identified by the first OCR engine and by the second OCR engine are combined to obtain the identification of the characters of the character string image.
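The two-engine combination step can be sketched as below. Both engines are stubs and the positional-dictionary merge is an assumed convention; the abstract specifies only that the two engines' results are combined by position into one identification.

```python
def first_engine(image):
    """Stub: returns type-1 characters keyed by position, plus zone positions
    it defers to the second-type engine (e.g. digits in 'A7B')."""
    return {0: "A", 2: "B"}, [1]

def second_engine(image, zones):
    """Stub: engine specialized for the second character type."""
    return {1: "7"}

def recognize(image):
    chars, zones = first_engine(image)
    chars.update(second_engine(image, zones))          # combine both results
    return "".join(chars[i] for i in sorted(chars))    # reassemble by position

text = recognize("A7B-image")
```

Splitting recognition this way lets each engine be tuned to one character type (e.g. letters vs. digits, or Latin vs. non-Latin scripts) instead of forcing one engine to handle both.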