Patent classifications
G06V30/2264
Training neural networks to perform tag-based font recognition utilizing font classification
The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
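The jointly trained architecture described in the abstract can be sketched at toy scale: an implicit font-classification branch produces per-unit attention weights that scale the hidden layer feeding the tag-recognition head. Everything below (dimensions, the sigmoid attention, the random stand-in weights) is an illustrative assumption, not the patent's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: 5 font classes, 8 hidden units, 4 tags
n_fonts, hidden_dim, n_tags = 5, 8, 4

# Shared hidden features of a font image (stand-in for an encoder output)
h = rng.normal(size=(hidden_dim,))

# Implicit font-classification branch: predict a font class, then project
# that prediction into per-unit attention weights in (0, 1).
W_cls = rng.normal(size=(hidden_dim, n_fonts))
W_attn = rng.normal(size=(n_fonts, hidden_dim))

font_probs = softmax(h @ W_cls)       # implicit font classification
attn = sigmoid(font_probs @ W_attn)   # attention weights

# Weight the hidden layer with the implicit font information
h_weighted = h * attn

# Tag-recognition head: enhanced font-tag probability vector
W_tag = rng.normal(size=(hidden_dim, n_tags))
tag_probs = sigmoid(h_weighted @ W_tag)

print(tag_probs.shape)  # (4,)
```

In practice both branches would be trained jointly so the attention weights carry learned font-category information; here they are random and only show the data flow.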
Deep learning tag-based font recognition utilizing font classification
The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
ARABIC HANDWRITING SYNTHESIS SYSTEM AND METHOD
Systems and associated methodology are presented for Arabic handwriting synthesis including accessing character shape images of an alphabet, determining a connection point location between two or more character shapes based on a calculated right edge position and a calculated left edge position of the character shape images, extracting character features that describe language attributes and width attributes of characters of the character shape images, the language attributes including character Kashida attributes, and generating images of cursive text based on the character Kashida attributes and the width attributes.
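The edge-based connection step can be sketched with binary glyph arrays: find the rightmost inked column of one glyph and the leftmost inked column of the next, then bridge them with a horizontal Kashida stroke at a connection row. The baseline heuristic, glyph shapes, and function names are assumptions for illustration, not the patented method.

```python
import numpy as np

def right_edge(glyph):
    """Column index of the rightmost inked pixel (glyphs are binary arrays)."""
    return int(np.where(glyph.any(axis=0))[0][-1])

def left_edge(glyph):
    """Column index of the leftmost inked pixel."""
    return int(np.where(glyph.any(axis=0))[0][0])

def connect_with_kashida(a, b, kashida_width=3, baseline=None):
    """Join glyph `a` to glyph `b` with a horizontal Kashida stroke.

    The connection row defaults to the lowest inked row of `a` at its
    right edge: a crude stand-in for real baseline detection.
    """
    h = max(a.shape[0], b.shape[0])
    a_r, b_l = right_edge(a), left_edge(b)
    if baseline is None:
        baseline = int(np.where(a[:, a_r])[0][-1])
    a_trim = a[:, : a_r + 1]   # crop empty right margin
    b_trim = b[:, b_l:]        # crop empty left margin
    out = np.zeros((h, a_trim.shape[1] + kashida_width + b_trim.shape[1]),
                   dtype=a.dtype)
    out[: a.shape[0], : a_trim.shape[1]] = a_trim
    out[baseline, a_trim.shape[1] : a_trim.shape[1] + kashida_width] = 1  # Kashida
    out[: b.shape[0], a_trim.shape[1] + kashida_width :] = b_trim
    return out

# Tiny demo glyphs
a = np.zeros((5, 4), dtype=int); a[1:4, 1:3] = 1
b = np.zeros((5, 4), dtype=int); b[1:4, 1:4] = 1
print(connect_with_kashida(a, b).shape)  # (5, 9)
```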
Simulated handwriting image generator
Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.
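The encoder-decoder split in the abstract can be sketched as a style encoder that pools a handwriting sample into a style vector, and a decoder that renders each character of a variable-length text conditioned on that style. All sizes, parameter matrices, and the pooling scheme below are illustrative assumptions; a real system would use trained convolutional and recurrent components.

```python
import numpy as np

rng = np.random.default_rng(1)

EMB = 16          # character embedding size (assumed)
STYLE = 8         # style vector size (assumed)
GLYPH = (12, 8)   # output glyph size per character (assumed)

# Random, untrained stand-ins for learned parameters
char_emb = {c: rng.normal(size=EMB) for c in "abcdefghijklmnopqrstuvwxyz "}
W_enc = rng.normal(size=(GLYPH[0] * GLYPH[1], STYLE))
W_dec = rng.normal(size=(EMB + STYLE, GLYPH[0] * GLYPH[1]))

def encode_style(sample_glyphs):
    """Encoder: pool per-glyph features of a handwriting sample into a style vector."""
    feats = np.stack([g.reshape(-1) @ W_enc for g in sample_glyphs])
    return np.tanh(feats.mean(axis=0))

def decode(text, style):
    """Decoder: render each character of a variable-length text in the given style."""
    glyphs = []
    for c in text:
        z = np.concatenate([char_emb[c], style])
        glyphs.append((z @ W_dec).reshape(GLYPH))
    return np.concatenate(glyphs, axis=1)  # one image strip, left to right

sample = [rng.normal(size=GLYPH) for _ in range(3)]  # stand-in handwriting sample
img = decode("hello", encode_style(sample))
print(img.shape)  # (12, 40)
```

Because the style vector is the only sample-dependent input to the decoder, the same text rendered from two different samples inherits each sample's distinctive appearance, which is the behavior the abstract describes.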
Method for generating word code, method and device for recognizing codes
A method for generating a word code and a method and device for recognizing codes are provided, both falling within the field of machine vision recognition. The method for recognizing a word code includes: acquiring an image containing word codes and recognizing the target words therein; splitting the target words according to a common rule and recognizing all visual-anomaly split sequences; and making a determination on the word codes: if the word codes were generated on the basis of an encoding mechanism designed by the system, reading them directly and performing the invocation; otherwise, invoking a pre-inputted target file of the word codes according to the target-word meaning sequence numbers and visual-anomaly sequence numbers corresponding to the word codes.
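The two-branch decision in this abstract reduces to a simple control flow: check whether a recognized code matches the system's own encoding mechanism, and otherwise fall back to a lookup in the pre-inputted target file keyed by the meaning and anomaly sequence numbers. The tables, keys, and names below are assumptions for illustration, not the patent's actual encoding scheme.

```python
# Stand-in for codes generated by the system's own encoding mechanism
SYSTEM_CODES = {"AB12": "open_valve", "CD34": "close_valve"}

# Stand-in for the pre-inputted target file, keyed by
# (meaning sequence number, visual-anomaly sequence number)
TARGET_FILE = {("w1", "v2"): "user_defined_action"}

def recognize_word_code(code, meaning_seq, anomaly_seq):
    if code in SYSTEM_CODES:
        # System-designed code: read it directly and invoke the mapped action
        return SYSTEM_CODES[code]
    # Otherwise consult the pre-inputted target file
    return TARGET_FILE.get((meaning_seq, anomaly_seq))

print(recognize_word_code("AB12", "w1", "v2"))  # open_valve
print(recognize_word_code("ZZ99", "w1", "v2"))  # user_defined_action
```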