G06V30/245

Method of generating font database, and method of training neural network model

A method of generating a font database and a method of training a neural network model are provided, which relate to the field of artificial intelligence, and in particular to computer vision and deep learning technology. The method of generating the font database includes: determining, by using a trained similarity comparison model, the basic font database among a plurality of basic font databases that is most similar to handwriting font data of a target user, as a candidate font database; and adjusting the handwriting font data of the target user, by using the trained basic font database model for generating the candidate font database, so as to obtain a target font database for the target user.
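The candidate-selection step can be sketched as a nearest-neighbor search over database embeddings. The `embed` function below is a hypothetical stand-in for the trained similarity comparison model (here just mean and standard deviation of pixel intensities); all names and the distance metric are illustrative assumptions, not the patent's actual model.

```python
import numpy as np

def embed(samples):
    # Hypothetical stand-in for the trained similarity comparison model:
    # summarize a set of glyph images as a small feature vector.
    samples = np.asarray(samples, dtype=float)
    return np.array([samples.mean(), samples.std()])

def pick_candidate_database(user_samples, basic_databases):
    """Return the name of the basic font database whose embedding is
    closest (L2 distance) to the target user's handwriting embedding."""
    user_vec = embed(user_samples)
    best_name, best_dist = None, float("inf")
    for name, samples in basic_databases.items():
        dist = float(np.linalg.norm(embed(samples) - user_vec))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```

The selected database's generator model would then be fine-tuned or conditioned on the user's samples to produce the target font database.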

Glyph accessibility system
11809806 · 2023-11-07

Glyph accessibility techniques are described as implemented by a digital content processing system that accesses glyphs and glyph alternatives. These techniques include a preprocessing stage in which a base font is used to determine the similarity of its glyphs to each other. Glyph metadata describing this similarity is cached in a storage device and used at runtime to locate similar glyphs in other fonts more efficiently.
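The precomputed metadata can be pictured as a per-glyph list of similar glyphs, built once from the base font and reused at runtime. This is a minimal sketch assuming binary glyph bitmaps and Jaccard similarity; the actual similarity measure and cache format are not specified by the abstract.

```python
def glyph_similarity(a, b):
    # Jaccard similarity of two same-size binary glyph bitmaps (assumption)
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 1.0

def build_similarity_cache(base_font, threshold=0.6):
    """Precompute, for each glyph in the base font, the other glyphs whose
    bitmap similarity exceeds the threshold: the cached glyph metadata."""
    cache = {}
    for name, bmp in base_font.items():
        cache[name] = [other for other, obmp in base_font.items()
                       if other != name and glyph_similarity(bmp, obmp) >= threshold]
    return cache
```

At runtime, looking up alternatives for a glyph becomes a dictionary access instead of a full pairwise comparison.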

FONT ATTRIBUTE DETECTION
20230343124 · 2023-10-26

Described are techniques for font attribute detection. The techniques include receiving a document having different font attributes across a plurality of words, each composed of at least one character. The techniques further include generating a dense image document from the document by setting the plurality of words to a predefined size, removing blank spaces from the document, and altering an order of characters relative to the document. The techniques further include determining characteristics of the characters in the dense image document and aggregating the characteristics for at least one word. The techniques further include annotating the at least one word with a font attribute based on the aggregated characteristics.
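The dense-image preprocessing can be illustrated with strings standing in for word images: normalize each word to a fixed size, drop blanks, and reorder characters. The fixed width and the sorting rule below are illustrative assumptions, not the patent's actual parameters.

```python
def make_dense_document(words, size=6):
    """Toy stand-in for the dense-image construction: drop blank entries,
    reorder characters within each word, and normalize to a fixed width."""
    dense = []
    for w in words:
        w = w.strip()
        if not w:
            continue                     # remove blank spaces from the document
        w = "".join(sorted(w))           # alter the order of characters
        w = (w + "_" * size)[:size]      # set words to a predefined size
        dense.append(w)
    return dense
```

Downstream, per-character features would be computed on this compact representation and aggregated per word to predict attributes such as bold or italic.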

Image processing method for an identity document

An image processing method, for an identity document that comprises a data page, comprises acquiring a digital image of the data page of the identity document. The method further comprises assigning a class or a super-class to the candidate identity document via automatic classification of the digital image by a machine-learning algorithm trained beforehand, during a training phase, on a set of reference images; processing the digital image to obtain a set of at least one intermediate image whose weight is lower than or equal to that of the digital image; applying discrimination to the intermediate image using a discriminator neural network; and generating an output signal as output from the discriminator neural network, the value of which is representative of the probability that the candidate identity document is an authentic document or a fake.
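The described pipeline has three stages: classify, reduce, discriminate. The sketch below uses trivial stand-ins for the trained models (every function body is an assumption for illustration); the point is the control flow, including the constraint that the intermediate image's weight never exceeds the original's.

```python
def classify(image):
    # Hypothetical stand-in for the trained classifier: class from image size
    return "passport" if len(image) >= 4 else "id_card"

def downscale(image):
    # Intermediate image whose weight (size) is <= that of the original
    return image[::2]

def discriminate(image):
    # Stand-in discriminator: fraction of bright pixels as authenticity score
    return sum(image) / len(image)

def process_document(image):
    """Classify the candidate document, derive a lighter intermediate image,
    and score its authenticity with the discriminator."""
    doc_class = classify(image)
    intermediate = downscale(image)
    assert len(intermediate) <= len(image)  # weight constraint from the method
    return doc_class, discriminate(intermediate)
```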

HANDWRITING FEEDBACK

A computer-implemented method (100) for generating feedback based on a handwritten text, comprises the steps of initializing (110) a writing instrument (10) to be used in a writing operation comprising a handwritten text and capturing and processing (120) the handwritten text to generate digital text data. The method further comprises the steps of identifying (130) at least one handwritten text attribute associated with the digital text data, comparing (140) the at least one handwritten text attribute with predefined textual feature attributes, and generating (150) a textual feature based on the compared at least one handwritten text attribute and predefined textual feature attributes. In addition, the method comprises the steps of modifying (160) the digital text data using the textual feature and generating (170) feedback to a user (U) based on the modified digital text data.
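Steps 130 through 170 can be sketched as a compare-and-report loop over a single attribute. Here slant angle is used as the example attribute, and the reference value and tolerance are illustrative assumptions, not values from the patent.

```python
def generate_feedback(stroke_slants, reference_slant=0.0, tolerance=5.0):
    """Toy version of steps 130-170: derive a slant attribute from the
    captured strokes, compare it with a predefined attribute, and
    generate feedback to the user."""
    slant = sum(stroke_slants) / len(stroke_slants)  # identify text attribute
    deviation = slant - reference_slant              # compare with predefined
    if abs(deviation) <= tolerance:
        return "Slant looks consistent."
    side = "right" if deviation > 0 else "left"
    return f"Your writing slants {side} by about {abs(deviation):.0f} degrees."
```

A real system would extract many such attributes (size, spacing, pressure) from the digitized strokes and merge their feedback.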

APPARATUS AND METHOD FOR RECOGNIZING FORMALIZED CHARACTER SET BASED ON WEAKLY SUPERVISED LOCALIZATION

Disclosed herein are an apparatus and a method for recognizing a formalized character set based on weakly supervised localization. The formalized character set recognition apparatus based on weakly supervised localization may include memory for storing at least one program; and a processor for executing the program, wherein the program performs recognizing one or more numerals present in a formalized character set image and a number of appearances of each of the numerals, extracting a class activation map in which a location of attention in the formalized character set image is indicated when a specific numeral is recognized, and outputting a formalized character set number in which numerals recognized based on the extracted class activation map are arranged.
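The final arrangement step can be illustrated in isolation: once the recognizer reports each numeral and the locations of its class activation map peaks (one peak per appearance), ordering the peaks left to right reconstructs the number. The input format below is a simplifying assumption.

```python
def arrange_recognized_numerals(cam_peaks):
    """Given, for each recognized numeral, the x-coordinates of its class
    activation map peaks, order all appearances left to right to
    reconstruct the formalized character-set number."""
    placed = [(x, digit) for digit, xs in cam_peaks.items() for x in xs]
    return "".join(digit for _, digit in sorted(placed))
```

This is what makes weak supervision sufficient: the network is trained only on which numerals appear and how often, while the activation map supplies the spatial ordering.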

Preserving Document Design Using Font Synthesis
20220172498 · 2022-06-02

Automatic font synthesis for modifying a local font to have an appearance that is visually similar to a source font is described. A font modification system receives an electronic document including the source font together with an indication of a font descriptor for the source font. The font descriptor includes information describing various font attributes for the source font, which define a visual appearance of the source font. Using the source font descriptor, the font modification system identifies a local font that is visually similar in appearance to the source font by comparing local font descriptors to the source font descriptor. A visually similar font is then synthesized by modifying glyph outlines of the local font to achieve the visual appearance defined by the source font descriptor. The synthesized font is then used to replace the source font and output in the electronic document at the computing device.
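The local-font identification step amounts to a nearest-neighbor search over font descriptors. The attribute names (`weight`, `slant`) and squared-distance metric below are assumptions for illustration; the patent does not enumerate the descriptor's fields.

```python
def nearest_local_font(source_descriptor, local_fonts):
    """Pick the locally installed font whose attribute descriptor is
    closest to the source font's descriptor (squared L2 distance over
    the shared attribute keys)."""
    def dist(descriptor):
        return sum((descriptor[k] - source_descriptor[k]) ** 2
                   for k in source_descriptor)
    return min(local_fonts, key=lambda name: dist(local_fonts[name]))
```

The chosen font's glyph outlines would then be deformed toward the source descriptor to synthesize the replacement font.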

IDENTIFYING MATCHING FONTS UTILIZING DEEP LEARNING
20220083772 · 2022-03-17

The present disclosure relates to systems, methods, and non-transitory computer readable media for generating and providing matching fonts by utilizing a glyph-based machine learning model. For example, the disclosed systems can generate a glyph image by arranging glyphs from a digital document according to an ordering rule. The disclosed systems can further identify target fonts as fonts that include the glyphs within the glyph image. The disclosed systems can further generate target glyph images by arranging glyphs of the target fonts according to the ordering rule. Based on the glyph image and the target glyph images, the disclosed systems can utilize a glyph-based machine learning model to generate and compare glyph image feature vectors. By comparing a glyph image feature vector with a target glyph image feature vector, the font matching system can identify one or more matching glyphs.
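The final comparison step reduces to ranking target fonts by the similarity of their glyph-image feature vectors to the query's. This sketch assumes cosine similarity and precomputed vectors; the machine learning model that produces the vectors is out of scope here.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_matching_font(query_vec, target_vecs):
    """Return the target font whose glyph-image feature vector is most
    similar (cosine) to the query glyph image's feature vector."""
    return max(target_vecs, key=lambda name: cosine(query_vec, target_vecs[name]))
```

Arranging glyphs by a shared ordering rule before embedding (as the abstract describes) keeps the query and target images directly comparable.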

Preserving document design using font synthesis
11295181 · 2022-04-05

Automatic font synthesis for modifying a local font to have an appearance that is visually similar to a source font is described. A font modification system receives an electronic document including the source font together with an indication of a font descriptor for the source font. The font descriptor includes information describing various font attributes for the source font, which define a visual appearance of the source font. Using the source font descriptor, the font modification system identifies a local font that is visually similar in appearance to the source font by comparing local font descriptors to the source font descriptor. A visually similar font is then synthesized by modifying glyph outlines of the local font to achieve the visual appearance defined by the source font descriptor. The synthesized font is then used to replace the source font and output in the electronic document at the computing device.