Patent classifications
G06V30/2276
COMPLETING TYPESET CHARACTERS USING HANDWRITTEN STROKES
A system and method for completing a character of a text of a digital document on a computing device, the computing device comprising a processor, a memory, and at least one non-transitory computer readable medium for recognizing input under control of the processor, the at least one non-transitory computer readable medium being configured to: cause display (S900) of at least one typeset character of the text on a display interface of the computing device; detect a handwritten input stroke (S902) performed on the digital document in the vicinity of a typeset character; identify a first typeset character (S904) if the typeset character belongs to a list of base characters according to a language model; retrieve a predefined character version (S906) of the first typeset character from the memory; generate a hybrid character (S908) by replacing the first typeset character with the predefined character version; generate a list of character candidates (S910) with associated probabilities of recognition of the hybrid character provided by a recognition expert; and select a recognized character (S912) from the character candidate list by using a language expert.
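The claimed steps S904 through S912 can be sketched as a small pipeline. This is an illustrative reconstruction, not the patented implementation: the base-character list, the predefined "ink" versions, and the two toy expert functions are all assumptions introduced for the example.

```python
# Hypothetical sketch of the character-completion pipeline: a typeset base
# character plus a nearby handwritten stroke forms a "hybrid" character
# that is re-recognized. All names and tables below are illustrative.

BASE_CHARACTERS = {"c", "o", "l", "n"}  # assumed list of completable bases

# assumed predefined (ink-like) versions of the base characters
PREDEFINED_VERSIONS = {"c": "c_ink", "o": "o_ink", "l": "l_ink", "n": "n_ink"}

def recognition_expert(hybrid):
    """Toy recognizer: returns character candidates with probabilities."""
    table = {
        ("c_ink", "cedilla_stroke"): [("ç", 0.8), ("c", 0.2)],
        ("o_ink", "slash_stroke"): [("ø", 0.7), ("o", 0.3)],
    }
    return table.get(hybrid, [(hybrid[0], 1.0)])

def language_expert(candidates):
    """Toy language expert: pick the most probable candidate."""
    return max(candidates, key=lambda c: c[1])[0]

def complete_character(typeset_char, stroke):
    if typeset_char not in BASE_CHARACTERS:
        return typeset_char                      # S904: not a base character
    ink = PREDEFINED_VERSIONS[typeset_char]      # S906: predefined version
    hybrid = (ink, stroke)                       # S908: hybrid character
    candidates = recognition_expert(hybrid)      # S910: candidate list
    return language_expert(candidates)           # S912: final selection

print(complete_character("c", "cedilla_stroke"))  # → ç
```

The point of the hybrid step is that recognition runs on a combined ink-plus-stroke representation rather than on the stroke alone, so a single added stroke can turn "c" into "ç".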
Simulated handwriting image generator
Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.
WRITING RECOGNITION USING WEARABLE PRESSURE SENSING DEVICE
Writing recognition using a wearable pressure sensing device includes receiving pressure measurement data from a pressure sensor disposed upon a body part of a user. The pressure measurement data is indicative of a change in pressure of the body part due to an interaction of the body part with a medium indicative of a writing gesture by the user. A start boundary and end boundary for each of a plurality of writing symbols is detected based upon the pressure measurement data. At least one feature of the pressure measurement data associated with the plurality of writing symbols is extracted. A symbol pattern is detected based upon the extracted features, and at least one letter is detected based upon the symbol pattern. A word is detected based upon the detected at least one letter.
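The abstract's chain of steps (boundary detection, feature extraction, symbol-to-letter mapping, word assembly) can be sketched as follows. The pressure threshold, the toy features, and the feature-to-letter table are assumptions for illustration only; the patent does not specify them.

```python
# Illustrative sketch of the claimed pipeline: segment a pressure signal
# into writing symbols, extract features per symbol, map features to
# letters, and join the letters into a word.

PRESSURE_THRESHOLD = 0.5  # assumed contact threshold

def segment_symbols(samples):
    """Detect start/end boundaries of writing symbols in the pressure data."""
    segments, start = [], None
    for i, p in enumerate(samples):
        if p > PRESSURE_THRESHOLD and start is None:
            start = i                              # start boundary
        elif p <= PRESSURE_THRESHOLD and start is not None:
            segments.append(samples[start:i])      # end boundary
            start = None
    if start is not None:
        segments.append(samples[start:])
    return segments

def extract_features(segment):
    """Toy features: duration (sample count) and peak pressure."""
    return (len(segment), max(segment))

# assumed mapping from feature patterns to letters
SYMBOL_TO_LETTER = {(3, 0.9): "i", (5, 0.8): "h"}

def detect_word(samples):
    letters = [SYMBOL_TO_LETTER.get(extract_features(s), "?")
               for s in segment_symbols(samples)]
    return "".join(letters)

samples = [0.1, 0.8, 0.8, 0.9, 0.1, 0.6, 0.7, 0.8, 0.7, 0.6, 0.1]
print(detect_word(samples))  # → ih
```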
STROKE EXTRACTION IN FREE SPACE
An approach for extracting strokes in a free space environment is described. Boundaries are displayed in a free space environment describing at least one two-dimensional surface area. One or more language movements are extracted from the free space environment by a paired ring device and transmitted as images for processing. Haptic feedback is provided to the paired ring device in response to detecting at least one language movement occurring outside of at least one two-dimensional surface area. At least one extracted language movement is input into a character training model.
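The boundary check that triggers the haptic feedback can be sketched as a point-in-rectangle test. The surface dimensions and the idea of returning the offending point indices are assumptions for the example.

```python
# Sketch of the boundary check: a movement point outside the displayed
# two-dimensional surface area would trigger haptic feedback to the
# paired ring device. The surface extent below is an assumed example.

SURFACE = {"x0": 0.0, "y0": 0.0, "x1": 21.0, "y1": 29.7}  # assumed area (cm)

def inside_surface(x, y, s=SURFACE):
    return s["x0"] <= x <= s["x1"] and s["y0"] <= y <= s["y1"]

def feedback_points(points):
    """Return indices of stroke points that should trigger haptic feedback."""
    return [i for i, (x, y) in enumerate(points) if not inside_surface(x, y)]

stroke = [(1.0, 1.0), (25.0, 5.0), (10.0, 10.0)]
print(feedback_points(stroke))  # → [1]
```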
Stroke extraction in free space
An approach for stroke extraction in free space utilizing a paired ring device is provided. The approach receives one or more images transmitted from the paired ring device, wherein the one or more images are transcribed sequentially from data related to one or more movements recorded by the paired ring device, and wherein the one or more images include one or more of a plurality of vector points, a plurality of coordinates, and a plurality of dots interconnected by a plurality of lines. The approach inputs the one or more images into a character training model. The approach maps the one or more images into one or more characters. The approach transcribes the one or more characters into a digital document.
Arabic optical character recognition method using hidden markov models and decision trees
Disclosed is an Arabic optical character recognition method using Hidden Markov Models and decision trees, comprising: receiving an input image containing Arabic text, removing all diacritics from the input image by detecting a bounding box of each diacritic and comparing coordinates thereof to those of a bounding box of a text body, segmenting the input image into four layers, and conducting feature extraction on the segmented four layers, inputting results of feature extraction into a Hidden Markov Model thereby generating HMM models for representing each Arabic character, conducting iterative training of the HMM models until an overall likelihood criterion is satisfied, and inputting results of iterative training into a decision tree thereby predicting the locations and classes of the diacritics and producing final recognition results. The invention facilitates straightforward recognition of Arabic by exploiting its writing features, while achieving comparatively high recognition precision.
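The diacritic-removal step compares each connected component's bounding box against the text body's box. A minimal sketch, assuming an `(x0, y0, x1, y1)` box format and the simple rule that a mark lying entirely above or below the body line is a diacritic (the patent's actual coordinate comparison may be more elaborate):

```python
# Sketch of diacritic removal by bounding-box comparison. Boxes are
# (x0, y0, x1, y1) with y increasing downward; both the format and the
# above/below rule are assumptions for illustration.

def is_diacritic(box, body_box):
    _, y0, _, y1 = box
    _, by0, _, by1 = body_box
    return y1 <= by0 or y0 >= by1  # entirely above or below the text body

def remove_diacritics(component_boxes, body_box):
    return [b for b in component_boxes if not is_diacritic(b, body_box)]

body = (0, 10, 100, 30)          # main text line
marks = [(5, 2, 10, 8),          # above the body: diacritic
         (20, 12, 30, 28),       # overlaps the body: keep
         (40, 32, 45, 38)]       # below the body: diacritic
print(remove_diacritics(marks, body))  # → [(20, 12, 30, 28)]
```

Removing the diacritics first lets the HMM model the base letter shapes alone; the decision tree then restores the diacritics' locations and classes in the final step.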
Method for recognizing handwriting on a physical surface
The invention relates to a method for recognizing handwriting on a physical surface on the basis of three-dimensional signals originating from sensors of a terminal, characterized in that the signals are obtained from at least three different types of sensors, and in that the method comprises steps of sampling, along three axes and over a sliding time window, inertial signals originating from the sensors, fusing the sampled signals into a 9-dimensional vector for each sampling period, converting the fused signals into a sequence of characteristic 9-dimensional vectors, and, when a signal characteristic of an input start has been detected, storing the sequence of characteristic vectors in a list of sequences of characteristic vectors, the preceding steps being repeated until the detection of a signal characteristic of an input end, the method furthermore comprising, on detection of said signal characteristic of an input end, a step of recognizing a word on the basis of the list of sequences of characteristic vectors created over the time window.
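The fusion step can be sketched as concatenating one 3-axis sample from each sensor type into a single 9-dimensional vector per sampling period. The choice of accelerometer, gyroscope, and magnetometer as the three sensor types is an assumption consistent with, but not stated by, the abstract.

```python
# Sketch of the sensor-fusion step: per sampling period, three 3-axis
# samples (assumed: accelerometer, gyroscope, magnetometer) are fused
# into one 9-dimensional vector over a sliding time window.

def fuse(accel, gyro, mag):
    """Fuse three 3-axis samples into a single 9-dimensional vector."""
    assert len(accel) == len(gyro) == len(mag) == 3
    return tuple(accel) + tuple(gyro) + tuple(mag)

def fuse_window(accel_seq, gyro_seq, mag_seq):
    """Build the sequence of 9-D vectors for a sliding time window."""
    return [fuse(a, g, m) for a, g, m in zip(accel_seq, gyro_seq, mag_seq)]

window = fuse_window(
    [(0.0, 0.1, 9.8), (0.0, 0.2, 9.7)],      # accelerometer (m/s²)
    [(0.01, 0.0, 0.0), (0.02, 0.0, 0.0)],    # gyroscope (rad/s)
    [(30.0, 0.0, 45.0), (30.1, 0.0, 45.0)],  # magnetometer (µT)
)
print(len(window), len(window[0]))  # → 2 9
```

Input-start and input-end detection then bracket such windows into the list of vector sequences on which word recognition runs.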
ARABIC SCRIPT ANALYSIS WITH CONNECTION POINTS
Systems and associated methodology are presented for Arabic handwriting synthesis including accessing character shape images of an alphabet, determining a connection point location between two or more character shapes based on a calculated right edge position and a calculated left edge position of the character shape images, extracting character features that describe language attributes and width attributes of characters of the character shape images, the language attributes including character Kashida attributes, and generating images of cursive text based on the character Kashida attributes and the width attributes.
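The connection-point determination from the calculated right and left edge positions can be sketched as below. The glyph layout, coordinates, and the rule of placing the connection point at the middle of the gap are assumptions for the example; the Kashida width here is simply the gap the elongation stroke must span.

```python
# Sketch of the connection-point step for right-to-left cursive layout:
# the Kashida bridges the gap between one glyph's left edge and the next
# glyph's right edge. All coordinates and widths are assumed examples.

def connection_gap(right_glyph, left_glyph):
    """Width of the gap (the Kashida) between two adjacent glyphs."""
    right_glyph_left_edge = right_glyph["x"]
    left_glyph_right_edge = left_glyph["x"] + left_glyph["width"]
    return right_glyph_left_edge - left_glyph_right_edge

def connect_with_kashida(right_glyph, left_glyph):
    gap = connection_gap(right_glyph, left_glyph)
    connection_x = right_glyph["x"] - gap / 2  # assumed: middle of the gap
    return {"kashida_width": gap, "connection_x": connection_x}

right_shape = {"x": 50, "width": 20}  # written first in right-to-left order
left_shape = {"x": 10, "width": 30}
print(connect_with_kashida(right_shape, left_shape))
```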
METHOD FOR SYNTHESIZING ARABIC HANDWRITTEN TEXT
Systems and associated methodology are presented for Arabic handwriting synthesis including accessing character shape images of an alphabet, determining a connection point location between two or more character shapes based on a calculated right edge position and a calculated left edge position of the character shape images, extracting character features that describe language attributes and width attributes of characters of the character shape images, the language attributes including character Kashida attributes, and generating images of cursive text based on the character Kashida attributes and the width attributes.