Patent classifications
G06V30/245
Organizing and representing a collection of fonts according to visual similarity utilizing machine learning
Utilizing a visual-feature-classification model to generate font maps that efficiently and accurately organize fonts based on visual similarities. For example, extracting features from fonts of varying styles and utilizing a self-organizing map (or another visual-feature-classification model) to map extracted font features to positions within font maps. Further, magnifying areas of font maps by mapping some fonts within a bounded area to positions within a higher-resolution font map. Additionally, navigating the font map to identify visually similar fonts (e.g., fonts within a threshold similarity).
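The mapping step described above can be sketched with a minimal self-organizing map in NumPy. The grid size, the linearly decaying learning-rate and neighbourhood schedules, and the premise that font features arrive as fixed-length vectors from some upstream feature extractor are all illustrative assumptions, not details taken from the abstract:

```python
import numpy as np

def train_som(features, grid_w=8, grid_h=8, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Train a tiny self-organizing map so that nearby grid cells end up
    holding visually similar feature vectors (a sketch of the font-map idea;
    real font features would come from a trained visual encoder)."""
    rng = np.random.default_rng(seed)
    dim = features.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)  # (h, w, 2) cell coords
    n_steps = epochs * len(features)
    step = 0
    for _ in range(epochs):
        for f in features:
            t = step / n_steps
            lr = lr0 * (1.0 - t)                 # linearly decaying learning rate
            sigma = sigma0 * (1.0 - t) + 1e-3    # shrinking neighbourhood radius
            # best-matching unit: grid cell whose weight is closest to f
            d = np.linalg.norm(weights - f, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood update around the BMU
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            h = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
            weights += lr * h * (f - weights)
            step += 1
    return weights

def map_position(weights, f):
    """Return the (row, col) map position of a font's feature vector."""
    d = np.linalg.norm(weights - f, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

The magnification step would then re-run the same procedure on only the fonts falling inside a bounded region, with a finer grid.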
Handwriting feedback
A computer-implemented method for generating feedback based on a handwritten text, comprises the steps of initializing a writing instrument to be used in a writing operation comprising a handwritten text and capturing and processing the handwritten text to generate digital text data. The method further comprises the steps of identifying at least one handwritten text attribute associated with the digital text data, comparing the at least one handwritten text attribute with predefined textual feature attributes, and generating a textual feature based on the compared at least one handwritten text attribute and predefined textual feature attributes. In addition, the method comprises the steps of modifying the digital text data using the textual feature and generating feedback to a user based on the modified digital text data.
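The compare-and-generate-feedback steps can be sketched as follows; the particular attribute names (slant, spacing) and the min/max target ranges are hypothetical stand-ins for the patent's unspecified "predefined textual feature attributes":

```python
def generate_feedback(measured, predefined):
    """Compare measured handwriting attributes against predefined target
    ranges and produce per-attribute feedback strings.
    `measured` maps attribute name -> value; `predefined` maps the same
    names -> (low, high) target ranges (illustrative assumptions)."""
    feedback = []
    for name, value in measured.items():
        bounds = predefined.get(name)
        if bounds is None:
            continue  # no predefined attribute to compare against
        lo, hi = bounds
        if value < lo:
            feedback.append(f"{name}: increase (measured {value}, target >= {lo})")
        elif value > hi:
            feedback.append(f"{name}: decrease (measured {value}, target <= {hi})")
        else:
            feedback.append(f"{name}: within target range")
    return feedback
```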
METHOD AND APPARATUS OF INSPECTING PRINTED DOCUMENT
An information processing apparatus inspects the printed contents of a printed sheet. The apparatus includes a recognition unit configured to recognize the contents printed on the sheet, an acquisition unit configured to acquire an attribute value from the information recognized by the recognition unit, a specification unit configured to specify the time required for inspection of the sheet's printed contents using the acquired attribute value, and a notification unit configured to issue a notification to a user based on a comparison between the required time specified by the specification unit and a time limit for inspecting the printed contents.
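The comparison-and-notification logic reduces to a simple estimate check. In this sketch, the attribute value is assumed to be a character count and the per-character inspection time an assumed calibration constant; neither detail is specified in the abstract:

```python
def inspection_notice(attribute_value, time_per_unit, time_limit):
    """Estimate the inspection time from a recognized attribute (e.g. the
    number of characters on the sheet) and compare it to the time limit.
    `time_per_unit` is an assumed calibration constant."""
    required = attribute_value * time_per_unit
    if required > time_limit:
        return (f"Estimated inspection time {required:.1f}s "
                f"exceeds the {time_limit:.1f}s limit.")
    return (f"Inspection expected to finish within the limit "
            f"({required:.1f}s of {time_limit:.1f}s).")
```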
Analyzing font similarity for presentation
A system includes a computing device that includes a memory configured to store instructions. The system also includes a processor to execute the instructions to perform operations that include receiving data representing features of a first font and data representing features of a second font. The first font and the second font are capable of representing one or more glyphs. Operations also include receiving survey-based data representing the similarity between the first and second fonts, and training a machine learning system using the features of the first font, the features of the second font, and the survey-based data that represents the similarity between the two fonts.
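A minimal version of this training setup can be sketched with a linear least-squares fit: pairwise feature differences are mapped to the survey-reported similarity scores. The abstract does not specify the learner, so ordinary least squares stands in for whatever machine learning system is actually used:

```python
import numpy as np

def train_similarity_model(feat_a, feat_b, survey_scores):
    """Fit a linear model from per-pair font-feature differences to
    survey-based similarity scores (least squares is an illustrative
    stand-in for the patent's unspecified learner)."""
    X = np.abs(feat_a - feat_b)                  # pairwise feature distances
    X = np.hstack([X, np.ones((len(X), 1))])     # bias column
    w, *_ = np.linalg.lstsq(X, survey_scores, rcond=None)
    return w

def predict_similarity(w, fa, fb):
    """Predict the similarity of a new font pair from its feature difference."""
    x = np.append(np.abs(fa - fb), 1.0)
    return float(x @ w)
```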
Deep learning tag-based font recognition utilizing font classification
The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
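The weighting of hidden layers by implicit font information can be sketched as a toy forward pass. The shapes, the way font probabilities are projected back onto the hidden units, and the sigmoid tag head are all illustrative assumptions about the attention mechanism, not the patent's actual architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def tag_probabilities(x, W_hidden, W_font, W_tag):
    """Toy tag-recognition forward pass whose hidden layer is reweighted
    by implicit font-classification attention (all shapes and the
    reweighting rule are illustrative assumptions)."""
    h = np.maximum(0.0, x @ W_hidden)       # shared hidden features (ReLU)
    font_probs = softmax(h @ W_font)        # implicit font classification
    attention = font_probs @ W_font.T       # project font info back onto h
    h_weighted = h * (1.0 + attention)      # emphasise font-relevant units
    # independent per-tag probabilities via a sigmoid head
    return 1.0 / (1.0 + np.exp(-(h_weighted @ W_tag)))
```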
Method and apparatus for enabling text editing in a scanned document while maintaining fidelity of the appearance of the text
A computer-implemented method and apparatus for enabling text editing in a scanned document while maintaining fidelity of the appearance of the text. The method comprises creating a synthesized font comprising a plurality of characters using characters present in a scanned document; replacing the characters in the scanned document with characters from the synthesized font; and enabling editing of the scanned document, wherein enabling editing comprises adding at least some characters from the synthesized font to the document for at least some characters added during editing.
Text border tool and enhanced corner options for background shading
Disclosed herein are various techniques for more precisely and reliably (a) positioning top and bottom border edges relative to textual content, (b) positioning left and right border edges relative to textual content, (c) positioning mixed edge borders relative to textual content, (d) positioning boundaries of a region of background shading that fall within borders of textual content, (e) positioning borders relative to textual content that spans columns, (f) positioning respective borders relative to discrete portions of textual content, (g) positioning collective borders relative to discrete, abutting portions of textual content, (h) applying stylized corner boundaries to a region of background shading, and (i) applying stylized corners to borders.
SYSTEMS AND METHODS FOR PRINTED CODE INSPECTION
This specification describes methods and systems for printed code inspection. For instance, the specification describes a computer-implemented method for printed code inspection by a printed code inspection system operating in conjunction with a production line apparatus configured to move objects along a production line comprising: receiving an image of an object to which a printed code comprising one or more printed characters should have been applied, the image having been captured when the object was located at a particular position on the production line; analysing the image to detect, based on a set of one or more character identification parameters, at least one candidate character within the image; determining, for each of the at least one candidate characters and based on a set of one or more candidate character properties, a likelihood that the candidate character is one of the printed characters of the printed code that should have been applied to the object; determining, based on the candidate characters determined as being likely to be one of the printed characters of the printed code that should have been applied to the object, whether the printed code is present and legible on the object; and outputting an indication as to whether the printed code that should have been applied to the object is present and legible on the object.
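The presence-and-legibility decision at the end of the method above can be sketched as follows; the likelihood threshold and the greedy matching of candidates to expected characters are illustrative assumptions standing in for the unspecified candidate-character properties:

```python
def inspect_printed_code(candidates, expected_code, min_likelihood=0.6):
    """Decide whether a printed code is present and legible.
    `candidates` is a list of (character, likelihood) pairs detected in the
    image; the code counts as present and legible when every expected
    character is matched by a sufficiently likely candidate.
    (Threshold and matching rule are illustrative assumptions.)"""
    likely = [ch for ch, p in candidates if p >= min_likelihood]
    remaining = list(likely)
    for ch in expected_code:
        if ch in remaining:
            remaining.remove(ch)   # each candidate matches at most once
        else:
            return False           # an expected character is missing/illegible
    return True
```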
METHOD FOR TRAINING A FONT GENERATION MODEL, METHOD FOR ESTABLISHING A FONT LIBRARY, AND DEVICE
Provided are a method for training a font generation model, a method for establishing a font library, and a device. The method for training a font generation model includes the following steps: a source-domain sample character is input into the font generation model to obtain a first target-domain generated character; the first target-domain generated character and a preset target-domain sample character are input into a character classification model to obtain a first feature loss of the font generation model; the first target-domain generated character and the target-domain sample character are input into a font classification model to obtain a second feature loss of the font generation model; a target feature loss is determined according to the first feature loss and/or the second feature loss; and the model parameters of the font generation model are updated according to the target feature loss.
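The target-feature-loss step can be sketched as a weighted combination of the two classifier feature losses. The L1 feature distance and the weights are illustrative assumptions; the abstract only states that the target loss is determined from the first and/or second feature loss:

```python
import numpy as np

def target_feature_loss(gen_feat_char, sample_feat_char,
                        gen_feat_font, sample_feat_font,
                        use_char=True, use_font=True,
                        w_char=1.0, w_font=1.0):
    """Combine the character-classifier feature loss and the
    font-classifier feature loss into the target loss used to update
    the generator (distance metric and weights are assumptions)."""
    loss = 0.0
    if use_char:
        # first feature loss: character-classification feature distance
        loss += w_char * float(np.abs(gen_feat_char - sample_feat_char).mean())
    if use_font:
        # second feature loss: font-classification feature distance
        loss += w_font * float(np.abs(gen_feat_font - sample_feat_font).mean())
    return loss
```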
IMAGE DETECTION APPARATUS AND OPERATION METHOD THEREOF
An image detection apparatus includes: a display outputting an image; a memory storing one or more instructions; and a processor configured to execute the one or more instructions stored in the memory to: detect, by using a neural network, an additional information area in a first image output on the display; obtain style information of the additional information area from the additional information area; and detect, in a second image output on the display, an additional information area having style information different from the style information by using a model that has learned an additional information area having new style information generated based on the style information.
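The style-comparison step can be sketched as a distance test against previously learned styles. Encoding an area's style information (e.g. colour, position, and size of an overlaid banner) as a numeric vector, and the distance tolerance, are both illustrative assumptions:

```python
import numpy as np

def is_new_style(area_style, known_styles, tol=0.25):
    """Return True when a detected additional-information area's style
    vector differs from every previously learned style by more than tol
    (vector encoding and tolerance are illustrative assumptions)."""
    area = np.asarray(area_style, dtype=float)
    return all(np.linalg.norm(area - np.asarray(s, dtype=float)) > tol
               for s in known_styles)
```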