G06V30/245

METHOD OF TRAINING CYCLE GENERATIVE NETWORKS MODEL, AND METHOD OF BUILDING CHARACTER LIBRARY
20220189189 · 2022-06-16

A method of training a cycle generative networks model and a method of building a character library are provided, which relate to the field of artificial intelligence, in particular to computer vision and deep learning technology, and which may be applied to scenarios such as image processing and image recognition. A specific implementation scheme includes: inputting a source domain sample character into the cycle generative networks model to obtain a first target domain generated character; calculating a character error loss and a feature loss of the cycle generative networks model by inputting the first target domain generated character and a preset target domain sample character into a character classification model; and adjusting a parameter of the cycle generative networks model according to the character error loss and the feature loss. An electronic device and a storage medium are also provided.
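The loss computation in this scheme can be sketched in a few lines. This is a toy illustration under stated assumptions, not the patent's implementation: cross-entropy stands in for the character error loss, L1 distance for the feature loss, and the gradients are assumed precomputed.

```python
import math

def character_error_loss(class_probs, target_index):
    """Toy cross-entropy on the classifier's prediction for the generated
    character: large when the classifier cannot read the character correctly."""
    return -math.log(max(class_probs[target_index], 1e-9))

def feature_loss(generated_features, sample_features):
    """Toy L1 distance between classifier features of the generated character
    and the preset target-domain sample character."""
    return sum(abs(g - s) for g, s in zip(generated_features, sample_features))

def adjust_parameters(params, grads, char_loss, feat_loss, lr=0.1):
    """Gradient step on the combined loss (gradients assumed precomputed)."""
    total = char_loss + feat_loss
    return [p - lr * total * g for p, g in zip(params, grads)]
```

The key point the abstract makes is that both losses come from the same pre-trained character classification model, so a single combined scalar drives the parameter update.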

TRAINING METHOD FOR CHARACTER GENERATION MODEL, CHARACTER GENERATION METHOD, APPARATUS, AND MEDIUM

Provided are a training method for a character generation model, and a character generation method, apparatus, device, and medium, which relate to the technical field of artificial intelligence, particularly the fields of computer vision and deep learning. The specific implementation scheme is: a source domain sample word and a target domain style word are input into the character generation model to obtain a target domain generated word; the target domain generated word and a target domain sample word are input into a pre-trained character classification model to calculate a feature loss of the character generation model; and a parameter of the character generation model is adjusted according to the feature loss.

METHOD OF GENERATING FONT DATABASE, AND METHOD OF TRAINING NEURAL NETWORK MODEL
20220180650 · 2022-06-09

A method of generating a font database and a method of training a neural network model are provided, which relate to the field of artificial intelligence, in particular to computer vision and deep learning technology. The method of generating the font database includes: determining, by using a trained similarity comparison model, a basic font database that is most similar to handwriting font data of a target user from among a plurality of basic font databases, as a candidate font database; and adjusting the handwriting font data of the target user by using a trained basic font database model for generating the candidate font database, so as to obtain a target font database for the target user.
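The candidate-database selection step amounts to a nearest-neighbor search over the basic font databases. In this minimal sketch, the trained similarity comparison model is replaced by a Euclidean distance over feature vectors, and all names and data are illustrative assumptions.

```python
import math

def feature_distance(u, v):
    """Euclidean distance, standing in for the trained similarity model."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def pick_candidate_database(user_features, basic_databases):
    """Return the name of the basic font database whose feature vector is
    closest to the target user's handwriting features."""
    return min(basic_databases,
               key=lambda name: feature_distance(user_features, basic_databases[name]))
```

The chosen database then seeds the second step, where its generation model is fine-tuned on the user's handwriting to produce the personalized target font database.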

TRAINING NEURAL NETWORKS TO PERFORM TAG-BASED FONT RECOGNITION UTILIZING FONT CLASSIFICATION
20220148325 · 2022-05-12

The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
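The weighting described above can be sketched as an element-wise attention over the tag network's hidden layer. This toy version uses hand-picked vectors and a single linear layer; everything here is an illustrative assumption, not the disclosed architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attended_tag_probabilities(hidden, font_attention, tag_weights):
    """Scale hidden activations of the tag-recognition network by implicit
    font-classification attention, then score each tag and normalize."""
    weighted = [h * a for h, a in zip(hidden, font_attention)]
    scores = [sum(w * x for w, x in zip(row, weighted)) for row in tag_weights]
    return softmax(scores)
```

The resulting enhanced tag probability vectors are what the system ranks against a font tag query to retrieve fonts.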

Optically analyzing text strings such as domain names

Systems and methods determine whether domain names are potentially maliciously registered variants of a set of monitored domain names. A computer system can receive domain names from a feed of newly registered domain names. For each received domain name, the computer system can generate a series of images of the domain name in different fonts and/or with various distortions applied thereto. The computer system can then transform the domain name images back to text via optical character recognition. Due to the differences in fonts and/or distortions applied to the generated images of the received domain name, the optical character recognition process can produce different text strings than the originally received domain name. The converted textual domain names are then analyzed to determine whether any one is sufficiently similar to a monitored domain name, indicating that the received domain name could be a malicious variant thereof.
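The final comparison stage reduces to string similarity between OCR output and the monitored names. The sketch below implements a standard Levenshtein edit distance and a threshold check; the rendering and OCR steps are out of scope, and the threshold value is an assumption.

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming over a rolling row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[len(b)]

def is_potential_variant(ocr_strings, monitored_domains, max_distance=1):
    """Flag a registration if any OCR reading of it is close to a monitored name."""
    return any(edit_distance(s, m) <= max_distance
               for s in ocr_strings for m in monitored_domains)
```

Rendering the same domain in several fonts and distortions makes visually confusable substitutions (such as "l" vs. "1") surface as different OCR readings, which is exactly what the distance check catches.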

IMAGE PROCESSING DEVICE AND IMAGE FORMING APPARATUS CAPABLE OF DETECTING AND CORRECTING MIS-CONVERTED CHARACTER IN TEXT EXTRACTED FROM DOCUMENT IMAGE
20220141349 · 2022-05-05

An image processing device includes a storage device that previously stores a document image, a plurality of registered words, and a plurality of font characters, and a control device that functions as: a character region identifier that identifies a character region in the document image; an image acquirer that acquires an image of the character region; a text extractor that extracts a text from the image of the character region; a word identifier that identifies each of the words in the text; a word determiner that determines whether each of the words matches one of the registered words; and a generator that generates a corrected text by replacing a target character of a non-matching word in the text with a font character whose first degree of matching with the target character is not lower than a first rate and is the highest among the font characters.
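The correction step can be sketched as follows. The matching-degree function, which in the device compares the target character's image against stored font characters, is passed in as a stand-in here; the threshold and all data are illustrative assumptions.

```python
def correct_word(word, target_pos, registered_words, font_chars,
                 match_degree, first_rate=0.8):
    """Return the word unchanged if it is registered; otherwise replace the
    character at target_pos with the font character having the highest degree
    of matching, provided that degree is not lower than first_rate."""
    if word in registered_words:
        return word
    target = word[target_pos]
    degree, best = max((match_degree(target, fc), fc) for fc in font_chars)
    if degree >= first_rate:
        return word[:target_pos] + best + word[target_pos + 1:]
    return word
```

For example, an OCR pass that mis-converts "WORD" to "W0RD" is repaired once the digit "0" matches the font character "O" above the first rate.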

Identifying matching fonts utilizing deep learning
11763583 · 2023-09-19

The present disclosure relates to systems, methods, and non-transitory computer readable media for generating and providing matching fonts by utilizing a glyph-based machine learning model. For example, the disclosed systems can generate a glyph image by arranging glyphs from a digital document according to an ordering rule. The disclosed systems can further identify target fonts as fonts that include the glyphs within the glyph image. The disclosed systems can further generate target glyph images by arranging glyphs of the target fonts according to the ordering rule. Based on the glyph image and the target glyph images, the disclosed systems can utilize a glyph-based machine learning model to generate and compare glyph image feature vectors. By comparing a glyph image feature vector with a target glyph image feature vector, the font matching system can identify one or more matching glyphs.
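The final matching stage compares feature vectors. In this minimal sketch, the glyph-based machine learning model is represented only by precomputed vectors, and cosine similarity picks the closest target font; all names and data are assumptions for demonstration.

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = (math.sqrt(sum(a * a for a in u))
             * math.sqrt(sum(b * b for b in v)))
    return dot / norms

def best_matching_font(query_vector, target_vectors):
    """target_vectors maps font name -> glyph image feature vector; return the
    font whose vector is most similar to the query glyph image's vector."""
    return max(target_vectors,
               key=lambda font: cosine_similarity(query_vector, target_vectors[font]))
```

Arranging glyphs by a fixed ordering rule before feature extraction ensures the query and target vectors describe the same glyph layout, so the comparison reflects style rather than glyph order.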

Utilizing glyph-based machine learning models to generate matching fonts
11216658 · 2022-01-04

The present disclosure relates to systems, methods, and non-transitory computer readable media for generating and providing matching fonts by utilizing a glyph-based machine learning model. For example, the disclosed systems can generate a glyph image by arranging glyphs from a digital document according to an ordering rule. The disclosed systems can further identify target fonts as fonts that include the glyphs within the glyph image. The disclosed systems can further generate target glyph images by arranging glyphs of the target fonts according to the ordering rule. Based on the glyph image and the target glyph images, the disclosed systems can utilize a glyph-based machine learning model to generate and compare glyph image feature vectors. By comparing a glyph image feature vector with a target glyph image feature vector, the font matching system can identify one or more matching glyphs.

Preserving Document Design Using Font Synthesis
20230326104 · 2023-10-12

Automatic font synthesis for modifying a local font to have an appearance that is visually similar to a source font is described. A font modification system receives an electronic document including the source font together with an indication of a font descriptor for the source font. The font descriptor includes information describing various font attributes for the source font, which define a visual appearance of the source font. Using the source font descriptor, the font modification system identifies a local font that is visually similar in appearance to the source font by comparing local font descriptors to the source font descriptor. A visually similar font is then synthesized by modifying glyph outlines of the local font to achieve the visual appearance defined by the source font descriptor. The synthesized font is then used to replace the source font, and the document is output at the computing device.
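The local-font selection step can be sketched by modeling each font descriptor as a dictionary of numeric attribute values and choosing the local font with the smallest descriptor distance. The attribute names and the distance measure here are illustrative assumptions, not the system's actual descriptor format.

```python
def descriptor_distance(source, local):
    """Sum of absolute attribute differences; missing attributes count as 0."""
    return sum(abs(source[attr] - local.get(attr, 0.0)) for attr in source)

def most_similar_local_font(source_descriptor, local_descriptors):
    """local_descriptors maps font name -> descriptor dict; return the local
    font whose descriptor is closest to the source font's descriptor."""
    return min(local_descriptors,
               key=lambda name: descriptor_distance(source_descriptor,
                                                    local_descriptors[name]))
```

The selected local font is only the starting point; its glyph outlines are then adjusted toward the attribute values in the source descriptor to synthesize the replacement font.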

Font identification from imagery

A system includes a computing device that includes a memory configured to store instructions. The system also includes a processor to execute the instructions to perform operations that include receiving an image that includes textual content in at least one font. The operations also include identifying the at least one font represented in the received image using a machine learning system. The machine learning system is trained using images representing a plurality of training fonts. A portion of the training images includes text located in the foreground and positioned over captured background imagery.
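Assembling such a training set can be sketched at the metadata level: each record pairs a font label with a text sample, and a fraction of the records is assigned a captured background image for compositing. Rendering itself is out of scope here, and all names and the fraction are illustrative assumptions.

```python
import random

def build_training_records(fonts, texts, backgrounds,
                           background_fraction=0.5, seed=0):
    """Produce (font label, text, optional background) training records; a
    background_fraction share of records gets a captured background image."""
    rng = random.Random(seed)
    records = []
    for font in fonts:
        for text in texts:
            background = (rng.choice(backgrounds)
                          if rng.random() < background_fraction else None)
            records.append({"font": font, "text": text, "background": background})
    return records
```

Mixing plain and background-composited samples is what lets the trained model recognize fonts in photographs rather than only in clean renderings.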