G06V30/245

TEXT BORDER TOOL AND ENHANCED CORNER OPTIONS FOR BACKGROUND SHADING

Disclosed herein are various techniques for more precisely and reliably (a) positioning top and bottom border edges relative to textual content, (b) positioning left and right border edges relative to textual content, (c) positioning mixed edge borders relative to textual content, (d) positioning boundaries of a region of background shading that fall within borders of textual content, (e) positioning borders relative to textual content that spans columns, (f) positioning respective borders relative to discrete portions of textual content, (g) positioning collective borders relative to discrete, abutting portions of textual content, (h) applying stylized corner boundaries to a region of background shading, and (i) applying stylized corners to borders.
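The abstract enumerates capabilities without spelling out the positioning rules. The following is a minimal illustrative sketch of one plausible reading of items (a) and (d): snapping the top and bottom edges of a border or shading region to the rendered glyphs' ascent/descent rather than the nominal line box. The `TextRun` fields and the padding value are assumptions for illustration, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class TextRun:
    """Illustrative text-run metrics (in points)."""
    baseline_y: float   # y of the baseline, measured from the page top
    ascent: float       # tallest glyph extent above the baseline
    descent: float      # deepest glyph extent below the baseline
    left_x: float       # x of the leftmost glyph edge
    right_x: float      # x of the rightmost glyph edge

def border_edges(run: TextRun, pad: float = 1.0):
    """Return (top, bottom, left, right) edges hugging the rendered glyphs.

    Top and bottom are derived from the glyphs' ascent/descent rather than
    the line box, so the border or shading region tracks the visible text,
    with a small uniform padding.
    """
    top = run.baseline_y - run.ascent - pad
    bottom = run.baseline_y + run.descent + pad
    left = run.left_x - pad
    right = run.right_x + pad
    return top, bottom, left, right

if __name__ == "__main__":
    run = TextRun(baseline_y=100.0, ascent=7.2, descent=2.1,
                  left_x=72.0, right_x=180.0)
    print(border_edges(run))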

Glyph Accessibility System
20230008785 · 2023-01-12

Glyph accessibility techniques are described as implemented by a digital content processing system that accesses glyphs and glyph alternatives. The techniques include a preprocessing stage in which a base font is used to determine the similarity of the glyphs within the base font to one another. Glyph metadata describing this similarity is cached in a storage device and used at runtime to locate similar glyphs in other fonts more efficiently.
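The abstract does not name the similarity measure. A minimal sketch follows, assuming the base font's glyphs have already been rasterized to same-sized binary bitmaps and using intersection-over-union as a stand-in similarity; writing the ranking to JSON stands in for the cached glyph metadata in a storage device.

```python
import json
import numpy as np

def glyph_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two same-sized binary glyph bitmaps."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(a, b).sum() / union)

def build_similarity_cache(glyphs: dict, path: str, top_k: int = 5) -> None:
    """Precompute, for each glyph of the base font, its most similar glyphs
    and persist the metadata so runtime lookups against other fonts can
    start from this cached ranking."""
    cache = {}
    names = list(glyphs)
    for name in names:
        scores = [(other, glyph_similarity(glyphs[name], glyphs[other]))
                  for other in names if other != name]
        scores.sort(key=lambda item: item[1], reverse=True)
        cache[name] = scores[:top_k]
    with open(path, "w") as fh:
        json.dump(cache, fh, indent=2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = {c: rng.integers(0, 2, size=(16, 16)) for c in "abc"}
    build_similarity_cache(demo, "glyph_similarity_cache.json")
```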

Preserving document design using font synthesis
11710262 · 2023-07-25

Automatic font synthesis for modifying a local font to have an appearance that is visually similar to a source font is described. A font modification system receives an electronic document including the source font together with an indication of a font descriptor for the source font. The font descriptor includes information describing various font attributes for the source font, which together define the visual appearance of the source font. Using the source font descriptor, the font modification system identifies a local font that is visually similar to the source font by comparing local font descriptors to the source font descriptor. A visually similar font is then synthesized by modifying glyph outlines of the local font to achieve the visual appearance defined by the source font descriptor. The synthesized font then replaces the source font and is output in the electronic document at the computing device.
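The abstract does not enumerate the attributes carried in the descriptor. The sketch below assumes a small numeric attribute vector (weight, width, slant, contrast, x-height ratio, all hypothetical) and picks the local font whose descriptor is nearest in Euclidean distance, which is one plausible realization of the comparison step; the outline-modification step is not shown.

```python
import numpy as np

# Hypothetical descriptor layout: (weight, width, slant, contrast, x_height_ratio)
def nearest_local_font(source_descriptor, local_descriptors):
    """Return the local font whose descriptor is closest to the source font's
    descriptor (smaller distance = more visually similar)."""
    src = np.asarray(source_descriptor, dtype=float)
    best_name, best_dist = None, float("inf")
    for name, desc in local_descriptors.items():
        dist = float(np.linalg.norm(src - np.asarray(desc, dtype=float)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist

if __name__ == "__main__":
    source = (700, 100, 0.0, 0.6, 0.52)          # a bold, upright source font
    local = {
        "LocalSans-Regular": (400, 100, 0.0, 0.55, 0.50),
        "LocalSans-Bold":    (700, 100, 0.0, 0.58, 0.51),
        "LocalSerif-Italic": (400,  95, 12.0, 0.70, 0.48),
    }
    print(nearest_local_font(source, local))     # expect LocalSans-Bold
```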

FONT DETECTION METHOD AND SYSTEM USING ARTIFICIAL INTELLIGENCE-TRAINED NEURAL NETWORK

The present disclosure relates to a font detection method using a neural network. The font detection method according to the present disclosure includes: receiving a target text image containing text; resizing the horizontal or vertical size to a reference input size according to the aspect ratio of the input target text image; and inputting the resized target text image into a trained neural network to output the font of the text included in the text image. The neural network may be trained with unit images, each extracted as a unit region of the reference input size from a training image generated by synthesizing a background with text. According to the present disclosure, fonts in a variety of usage scenarios may be detected effectively.
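The exact resizing rule is left to the claims. The sketch below assumes one axis is scaled to the reference input size, chosen by the image's aspect ratio, while proportions are preserved; the nearest-neighbour resize is a stand-in for whatever interpolation the trained network actually expects, and the reference size of 64 is an assumption.

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize for a 2-D (grayscale) image."""
    h, w = img.shape[:2]
    rows = (np.arange(out_h) * h / out_h).astype(int)
    cols = (np.arange(out_w) * w / out_w).astype(int)
    return img[rows][:, cols]

def resize_to_reference(img: np.ndarray, reference: int = 64) -> np.ndarray:
    """Scale the horizontal or vertical size to the reference input size,
    choosing the axis by the image's aspect ratio and keeping proportions."""
    h, w = img.shape[:2]
    if w >= h:   # wide text line: fix the height, let the width follow
        out_h, out_w = reference, max(1, round(w * reference / h))
    else:        # tall (e.g. vertical) text: fix the width instead
        out_h, out_w = max(1, round(h * reference / w)), reference
    return resize_nearest(img, out_h, out_w)

if __name__ == "__main__":
    text_image = np.random.randint(0, 256, size=(40, 300), dtype=np.uint8)
    print(resize_to_reference(text_image).shape)   # (64, 480)
```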

Method and system for converting font of Chinese character in image, computer device and medium

A method and a system for converting a font of a Chinese character in an image, a computer device and a medium are disclosed. A specific implementation of the method includes: acquiring a stroke of a to-be-converted Chinese character in the image and spatial distribution information of the stroke; and generating a Chinese character in a target font that corresponds to the to-be-converted Chinese character in the image according to the stroke of the to-be-converted Chinese character, the spatial distribution information of the stroke and standard stroke information of the target font, to replace the to-be-converted Chinese character.
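The abstract stays at the level of strokes and their spatial distribution. The sketch below assumes each stroke is described by a type label plus a bounding box, and rebuilds the character by placing the target font's standard stroke shape into each source stroke's box; this is one reading of the described steps, with toy stroke shapes, not the patented algorithm.

```python
import numpy as np

def place_stroke(canvas: np.ndarray, stroke_bitmap: np.ndarray, box) -> None:
    """Scale a standard stroke bitmap of the target font into the source
    stroke's bounding box (x0, y0, x1, y1) and stamp it onto the canvas."""
    x0, y0, x1, y1 = box
    h, w = y1 - y0, x1 - x0
    rows = (np.arange(h) * stroke_bitmap.shape[0] / h).astype(int)
    cols = (np.arange(w) * stroke_bitmap.shape[1] / w).astype(int)
    canvas[y0:y1, x0:x1] |= stroke_bitmap[rows][:, cols]

def convert_character(strokes, target_stroke_lib, size: int = 64) -> np.ndarray:
    """Render the target-font character from the source character's stroke
    types and spatial distribution (here: per-stroke bounding boxes)."""
    canvas = np.zeros((size, size), dtype=np.uint8)
    for stroke_type, box in strokes:
        place_stroke(canvas, target_stroke_lib[stroke_type], box)
    return canvas

if __name__ == "__main__":
    # Toy "standard strokes" of the target font: a horizontal and a vertical bar.
    lib = {
        "heng": np.ones((4, 32), dtype=np.uint8),   # horizontal stroke
        "shu":  np.ones((32, 4), dtype=np.uint8),   # vertical stroke
    }
    # Source character described as stroke types plus bounding boxes.
    strokes = [("heng", (8, 20, 56, 26)), ("shu", (30, 8, 36, 56))]
    print(convert_character(strokes, lib).sum())
```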

Machine learning-based inference of granular font properties

A textual properties model is used to infer values for certain font properties of interest given certain text-related data, such as rendered text images. The model may be used for numerous purposes, such as aiding with document layout, identifying font families that are similar to a given font family, and generating new font families with specific desired properties. In some embodiments, the model is trained from a combination of synthetic data that is labeled with values for the font properties of interest and partially-labeled data from existing "real-world" documents.
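The abstract does not fix the model family or the loss. The sketch below illustrates only the data-mixing idea: fully labeled synthetic samples and partially labeled real-world samples contribute to a masked mean-squared error, so properties with missing labels produce no training signal. Names, shapes, and the choice of MSE are assumptions.

```python
import numpy as np

def masked_property_loss(pred: np.ndarray, target: np.ndarray, mask: np.ndarray) -> float:
    """Mean-squared error over only the labeled font properties.

    pred/target: (batch, num_properties); mask is 1 where a label exists
    (always 1 for synthetic samples, sparse for real-world documents)."""
    mask = mask.astype(float)
    sq_err = mask * (pred - target) ** 2
    return float(sq_err.sum() / max(mask.sum(), 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.normal(size=(4, 3))                 # e.g. weight, x-height, contrast
    target = rng.normal(size=(4, 3))
    mask = np.array([[1, 1, 1],                    # synthetic: fully labeled
                     [1, 1, 1],
                     [1, 0, 0],                    # real-world: partially labeled
                     [0, 1, 0]])
    print(masked_property_loss(pred, target, mask))
```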

INSPECTION APPARATUS, CONTROL METHOD, AND INSPECTION METHOD
20230070196 · 2023-03-09

An inspection apparatus selects at least one character area in a first preview image obtained by reading and previewing a print product, sets a direction for a character in the selected character area, and registers the set direction and the character in the selected character area in association with each other. The apparatus then selects at least one character inspection area in a second preview image obtained by reading and previewing a print product that is an inspection target, sets a direction for a character in the selected character inspection area, rotates the character inspection area so that the set direction matches the direction set for the character in the selected character area, performs character recognition on the character in the rotated character inspection area, and inspects the character inspection area based on the result of the character recognition and the result of recognizing the character in the selected character area.
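A minimal sketch of the rotation-then-recognition step is shown below, assuming directions are multiples of 90 degrees and taking the character recognizer as an injected callable; the placeholder recognizer and the True/False comparison are illustrative assumptions, not a specific OCR API or the apparatus's actual inspection criterion.

```python
import numpy as np

def align_to_registered_direction(area: np.ndarray, area_dir: int, registered_dir: int) -> np.ndarray:
    """Rotate the character inspection area (in 90-degree steps) so that its
    direction matches the direction registered for the reference character."""
    steps = ((registered_dir - area_dir) // 90) % 4
    return np.rot90(area, k=steps)

def inspect_area(area, area_dir, registered_char, registered_dir, recognize) -> bool:
    """Return True when the recognized character matches the registered one."""
    aligned = align_to_registered_direction(area, area_dir, registered_dir)
    return recognize(aligned) == registered_char

if __name__ == "__main__":
    # Placeholder recognizer: pretends every upright (tall) crop reads as "A".
    def fake_recognize(img):
        return "A" if img.shape[0] >= img.shape[1] else "?"

    area = np.zeros((20, 40), dtype=np.uint8)      # sideways character crop
    print(inspect_area(area, area_dir=90, registered_char="A",
                       registered_dir=0, recognize=fake_recognize))
```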

Font creation apparatus, font creation method, and font creation program

Provided are a font creation apparatus, a font creation method, and a font creation program capable of generating, from a small number of character images having a style to be imitated, a complete font set for any language in the same style as those character images. A feature amount extraction unit (40) receives a character image (32) of a first font having the style to be imitated and extracts a first feature amount of the first font from the character image (32). An estimation unit (42) estimates a transformation parameter between the extracted first feature amount and a second feature amount of a reference second font (34). A feature amount generation unit (44) generates a fourth feature amount of a second font set to be created by transforming a third feature amount of a complete reference font set (36) based on the estimated transformation parameter. A font generation unit (46) generates a complete second font set by converting the generated fourth feature amount of the second font set into an image.
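The abstract names the units but not the form of the transformation parameter. The sketch below assumes it is a linear map estimated by least squares between the imitated font's feature amounts and the reference font's feature amounts for a few shared characters, then applied to the feature amounts of the complete reference font set; this corresponds loosely to units (42) and (44), and the image-decoding step of unit (46) is omitted. Shapes and the linear-map assumption are illustrative.

```python
import numpy as np

def estimate_transform(first_features: np.ndarray, second_features: np.ndarray) -> np.ndarray:
    """Least-squares linear map T such that second_features @ T ~= first_features.

    first_features:  (n_chars, d) features of the style-to-imitate font
    second_features: (n_chars, d) features of the same characters in the reference font
    """
    T, *_ = np.linalg.lstsq(second_features, first_features, rcond=None)
    return T

def transform_font_set(reference_set_features: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply the estimated transform to the complete reference font set."""
    return reference_set_features @ T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref_small = rng.normal(size=(10, 8))            # a few shared characters
    style_small = ref_small @ rng.normal(size=(8, 8)) * 0.9
    T = estimate_transform(style_small, ref_small)
    full_reference = rng.normal(size=(5000, 8))     # complete reference font set
    print(transform_font_set(full_reference, T).shape)   # (5000, 8)
```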

Training neural networks to perform tag-based font recognition utilizing font classification
11636147 · 2023-04-25

The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
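The architectural details live in the claims; the sketch below shows only the stated weighting idea, namely that an attention vector derived from implicit font classification rescales a hidden layer of the tag recognition network before the tag probabilities are produced. Layer sizes, the tag count, and the sigmoid/tanh choices are assumptions, and the untrained random weights are placeholders for the jointly trained networks.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tag_probabilities(text_features, W_hidden, W_font_attn, W_tags):
    """Font-tag probabilities with implicit font-classification attention.

    The attention vector (from the implicit font classifier branch) rescales
    the tag network's hidden layer, injecting font-class information."""
    hidden = np.tanh(text_features @ W_hidden)           # tag branch hidden layer
    attention = sigmoid(text_features @ W_font_attn)     # implicit font-class attention
    weighted = hidden * attention                        # element-wise re-weighting
    return sigmoid(weighted @ W_tags)                    # multi-label tag probabilities

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(2, 128))                    # rendered-text features
    W_hidden = rng.normal(size=(128, 64)) * 0.1
    W_font_attn = rng.normal(size=(128, 64)) * 0.1
    W_tags = rng.normal(size=(64, 300)) * 0.1            # 300 font tags (assumed)
    print(tag_probabilities(feats, W_hidden, W_font_attn, W_tags).shape)   # (2, 300)
```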

METHOD FOR TRAINING A FONT GENERATION MODEL, METHOD FOR ESTABLISHING A FONT LIBRARY, AND DEVICE

Provided are a method for training a font generation model, a method for establishing a font library, and a device. The method for training a font generation model includes the following steps. A source-domain sample character is input into the font generation model to obtain a first target-domain generated character. The first target-domain generated character is input into a font recognition model to obtain a target adversarial loss of the font generation model. The model parameters of the font generation model are then updated according to the target adversarial loss.
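The abstract gives only the training flow (generate, score with a font recognition model, update on the adversarial loss). The PyTorch sketch below is a schematic single training step under toy shapes: the frozen font recognizer, the cross-entropy adversarial loss, and the linear generator are illustrative stand-ins, not the losses or networks defined in the claims.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a font generation model and a (pre-trained, frozen) font recognizer.
generator = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 64 * 64), nn.Sigmoid())
font_recognizer = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 10))   # 10 font classes
for p in font_recognizer.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)

def training_step(source_chars: torch.Tensor, target_font_id: int) -> float:
    """One update of the font generation model driven by the adversarial loss."""
    generated = generator(source_chars)                       # target-domain generated characters
    logits = font_recognizer(generated)                       # font recognition model output
    target = torch.full((source_chars.size(0),), target_font_id, dtype=torch.long)
    adversarial_loss = nn.functional.cross_entropy(logits, target)
    optimizer.zero_grad()
    adversarial_loss.backward()
    optimizer.step()                                          # update generator parameters
    return adversarial_loss.item()

if __name__ == "__main__":
    source_batch = torch.rand(8, 1, 64, 64)                   # source-domain sample characters
    print(training_step(source_batch, target_font_id=3))
```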