Patent classifications
G06V30/373
HANDWRITING RECOGNITION METHOD AND APPARATUS
A method of generating handwriting information about handwriting of a user includes determining a first writing focus and a second writing focus; sequentially shooting a first local writing area, which is within a predetermined range from the first writing focus, and a second local writing area, which is within a predetermined range from the second writing focus; obtaining first handwriting from the first local writing area and second handwriting from the second local writing area; combining the first handwriting with the second handwriting; and generating the handwriting information based on a result of the combining.
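The combining step can be pictured as translating each locally captured stroke set into a shared page coordinate system before merging. The following is a minimal illustrative sketch, not the patented method; the stroke representation and area-origin offsets are assumptions made for the example.

```python
# Illustrative sketch: combine handwriting strokes captured from two local
# writing areas into one global handwriting record. Strokes are recorded in
# local coordinates and offset by their area's origin before merging.

def to_global(strokes, area_origin):
    """Translate local stroke points into the global page coordinate system."""
    ox, oy = area_origin
    return [[(x + ox, y + oy) for (x, y) in stroke] for stroke in strokes]

def combine_handwriting(first_strokes, first_origin, second_strokes, second_origin):
    """Combine first and second handwriting in capture order."""
    return to_global(first_strokes, first_origin) + to_global(second_strokes, second_origin)

# Example: two single-stroke captures around different writing foci.
first = [[(0, 0), (1, 1)]]    # captured near the first writing focus
second = [[(0, 0), (2, 0)]]   # captured near the second writing focus
info = combine_handwriting(first, (10, 10), second, (30, 10))
print(info)  # [[(10, 10), (11, 11)], [(30, 10), (32, 10)]]
```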
CAPTCHA techniques utilizing traceable images
Techniques are disclosed for generating, utilizing, and validating traceable image CAPTCHAs. In certain embodiments, a traceable image is displayed, and a trace of the image is analyzed to determine whether a user providing the trace is human. In certain embodiments, a computing device receives a request for an image, and in response, creates a traceable image based upon a plurality of image elements. The computing device transmits data representing the traceable image to cause a second computing device to display the traceable image via a touch-enabled display. The computing device receives user trace input data generated responsive to a trace made at the second computing device, and determines whether the trace is within an error tolerance range of a set of coordinates associated with the traceable image. The computing device then sends a result of the determination.
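The error-tolerance determination can be sketched as a nearest-point distance test: each traced point must fall within the tolerance of the coordinates associated with the traceable image. This is a hedged illustration only; a real validator would also consider path coverage, ordering, and timing, none of which are shown here.

```python
import math

def trace_is_human(trace, expected_path, tolerance):
    """Return True when every traced point lies within `tolerance` of the
    nearest coordinate of the expected path (a simplified stand-in for the
    patent's error-tolerance determination)."""
    def nearest_dist(p):
        return min(math.dist(p, q) for q in expected_path)
    return all(nearest_dist(p) <= tolerance for p in trace)

path = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(trace_is_human([(0.1, 0.2), (1.9, -0.1)], path, tolerance=0.5))  # True
print(trace_is_human([(0.0, 2.0)], path, tolerance=0.5))               # False
```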
SYSTEM AND METHOD FOR DIGITAL INK INTERACTIVITY
A system, method and computer program product for use in providing interactive ink from handwriting input to a computing device are provided. The computing device is connected to an input device in the form of an input surface. A user is able to provide input by applying pressure to or gesturing above the input surface using either his or her finger or an instrument such as a stylus or pen. The present system and method monitor the input strokes. The computing device further has a processor and an ink management system for recognizing the handwriting input under control of the processor. The ink management system is configured to cause display of, on a display interface of the computing device, first digital ink in accordance with first handwriting input, allocate references to ink elements of the first digital ink, map the references to corresponding recognized elements of the first handwriting input, and determine and store, in the memory of the computing device, ink objects including the references and mapped recognized elements.
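The reference/mapping idea described above can be illustrated by allocating an identifier per ink element and storing it together with the recognized counterpart as an "ink object". This is a hypothetical sketch; the class names, field types, and stroke-group identifiers are invented for the example.

```python
# Sketch of ink-object storage: each digital-ink element gets an allocated
# reference, which is mapped to its recognized element and stored.

from dataclasses import dataclass
from itertools import count

@dataclass
class InkObject:
    reference: int
    ink_element: str    # e.g. an identifier for a group of strokes
    recognized: str     # e.g. the recognized word for those strokes

class InkManager:
    def __init__(self):
        self._refs = count(1)   # allocator for ink-element references
        self.objects = {}       # stands in for device memory

    def store(self, ink_element, recognized):
        """Allocate a reference, map it to the recognized element, store both."""
        ref = next(self._refs)
        self.objects[ref] = InkObject(ref, ink_element, recognized)
        return ref

mgr = InkManager()
r = mgr.store("strokes_0-4", "hello")
print(mgr.objects[r].recognized)  # hello
```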
Classifying handwritten math and text symbols using spatial syntactic rules, semantic connections, and ink-related information of strokes forming the symbols
The invention relates to a method implemented by a computing device for processing math and text in handwriting, comprising: identifying symbols by performing handwriting recognition on a plurality of strokes; classifying, as a first classification, first symbols as either a text symbol candidate or a math symbol candidate with a confidence score reaching a first threshold; classifying, as a second classification, second symbols other than first symbols as either a text symbol candidate or a math symbol candidate with a respective confidence score by applying predefined spatial syntactic rules; updating or confirming, as a third classification, a result of the second classification by establishing semantic connections between symbols and comparing the semantic connections with the result of the second classification; and recognising each symbol as either text or math based on a result of said third classification.
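The three-stage cascade can be sketched as follows. This is a toy illustration only: the confidence threshold, the "operator glyph" spatial rule, and the digit-next-to-operator semantic check are invented stand-ins, not rules taken from the patent.

```python
# Toy sketch of staged math/text classification:
#   1) accept high-confidence recognizer labels,
#   2) label the rest with a stand-in spatial/syntactic rule,
#   3) update/confirm via a stand-in semantic connection.

FIRST_THRESHOLD = 0.9        # assumed value
MATH_GLYPHS = set("+-=^")    # assumed operator set

def classify_symbols(symbols):
    """symbols: list of dicts with 'glyph', 'confidence', 'label'."""
    labels = {}
    # First classification: keep labels whose confidence reaches the threshold.
    for i, s in enumerate(symbols):
        if s["confidence"] >= FIRST_THRESHOLD:
            labels[i] = s["label"]
    # Second classification: spatial/syntactic rule for the remaining symbols.
    for i, s in enumerate(symbols):
        if i not in labels:
            labels[i] = "math" if s["glyph"] in MATH_GLYPHS else "text"
    # Third classification: a digit adjacent to an operator is semantically
    # connected to math, so its label is updated (or confirmed) accordingly.
    for i, s in enumerate(symbols):
        if s["glyph"].isdigit():
            neighbours = [symbols[j]["glyph"] for j in (i - 1, i + 1)
                          if 0 <= j < len(symbols)]
            if any(n in MATH_GLYPHS for n in neighbours):
                labels[i] = "math"
    return [labels[i] for i in range(len(symbols))]

syms = [
    {"glyph": "2", "confidence": 0.5, "label": "text"},
    {"glyph": "+", "confidence": 0.95, "label": "math"},
    {"glyph": "2", "confidence": 0.5, "label": "text"},
]
print(classify_symbols(syms))  # ['math', 'math', 'math']
```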
Process of handwriting recognition and related apparatus
A process, and related apparatus, that exploit psycho-physiological aspects involved in the generation and perception of handwriting to infer, directly from the trace on the paper (or any other medium on which the author writes by hand), the interpretation of the writing, i.e. the sequence of characters that the trace is intended to represent.
Display device, image forming apparatus, and display method
A control section includes a track detecting section, a pattern determining section, and a character string display section. The track detecting section detects a track of a touch point on a touch panel by a user. The pattern determining section determines whether or not there is a match between the track of the touch point detected by the track detecting section and any of a plurality of patterns stored by a storage section. Upon the pattern determining section determining that there is a match, the character string display section reads a character string from the storage section that is associated with a pattern determined to match the track of the touch point from among the plurality of patterns. The character string display section causes pasting and display of the character string in an input region displayed by a display section.
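The track-to-pattern lookup above can be sketched as comparing the detected touch track against each stored pattern and, on a match, returning the associated character string for pasting. Matching here is a naive point-wise distance test on equal-length tracks; the stored patterns, strings, and tolerance are all invented for illustration.

```python
import math

# Hypothetical storage section: gesture patterns and their associated strings.
STORED = {
    "check": {"pattern": [(0, 0), (1, 1), (3, -1)], "string": "Approved"},
    "cross": {"pattern": [(0, 0), (2, 2), (0, 2)],  "string": "Rejected"},
}

def match_track(track, tolerance=0.5):
    """Return the character string for the first stored pattern matching the
    detected track, or None when no pattern matches."""
    for entry in STORED.values():
        pat = entry["pattern"]
        if len(pat) == len(track) and all(
            math.dist(p, q) <= tolerance for p, q in zip(track, pat)
        ):
            return entry["string"]
    return None

print(match_track([(0.1, 0.0), (1.0, 1.2), (3.0, -1.1)]))  # Approved
print(match_track([(5, 5), (6, 6), (7, 7)]))               # None
```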
Methods and systems for efficient automated symbol recognition using multiple clusters of symbol patterns
The current document is directed to methods and systems for identifying symbols corresponding to symbol images in a scanned-document image or other text-containing image, with the symbols corresponding to Chinese or Japanese characters, to Korean morpho-syllabic blocks, or to symbols of other languages that use a large number of symbols for writing and printing. In one implementation, the methods and systems to which the current document is directed carry out an initial processing step on one or more scanned images to identify, for each symbol image within a scanned document, a set of graphemes that match, with high frequency, symbol patterns that, in turn, match the symbol image. The set of graphemes identified for a symbol image is associated with the symbol image as a set of candidate graphemes for the symbol image. The set of candidate graphemes are then used, in one or more subsequent steps, to associate each symbol image with a most likely corresponding symbol code.
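The candidate-grapheme step can be pictured as a frequency count: patterns that match the symbol image vote for their graphemes, and the most frequently matched graphemes form the candidate set. In this hedged sketch the "match" predicate is a trivial feature-equality stand-in (not a real image comparison), and the pattern table is invented.

```python
from collections import Counter

# Hypothetical pattern table: (feature, grapheme) pairs standing in for
# trained symbol patterns.
PATTERNS = [
    ("loop", "口"), ("loop", "日"), ("loop", "口"),
    ("bar",  "一"), ("bar",  "二"),
]

def candidate_graphemes(symbol_features, top_k=2):
    """Return the graphemes whose patterns match the symbol image with the
    highest frequency, as the candidate set for that image."""
    votes = Counter(
        grapheme
        for feature, grapheme in PATTERNS
        if feature in symbol_features
    )
    return [g for g, _ in votes.most_common(top_k)]

print(candidate_graphemes({"loop"}))  # ['口', '日']
```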
Modifying Captured Stroke Information into an Actionable Form
A computer-implemented technique is described herein that receives captured stroke information when a user enters a handwritten note using an input capture device. The technique then analyzes the captured stroke information to produce output analysis information. Based on the output analysis information, the technique modifies the captured stroke information into an actionable form that contains one or more actionable content items, while otherwise preserving the original form of the captured stroke information. The technique then presents the modified stroke information on a canvas display device. The user may subsequently activate one or more actionable content items in the modified stroke information to perform various supplemental tasks that pertain to the handwritten note. In one case, for example, the technique can recognize the presence of entity items and/or list items in the note and then reproduce them in an actionable form.
Interacting with an Assistant Component Based on Captured Stroke Information
A computer-implemented technique is described herein that receives captured stroke information when a user enters handwritten notes using an input capture device. The technique then automatically performs analysis on the captured stroke information to produce output analysis information. Based on the output analysis information, the technique uses an assistant component to identify a response to the captured stroke information and/or to identify an action to be performed. The technique then presents the response, together with the original captured stroke information. In addition, or alternatively, the technique performs the action. In one case, the response is a text-based response; that text-based response may be presented in a freeform handwriting style to give the user the impression that a virtual assistant is responding to the user's own note. In another case, the response engages the user in an interactive exercise of any type.
Symbol recognition using decision forests
The current document is directed to methods and systems for identifying symbols corresponding to symbol images in a scanned-document image or other text-containing image, with the symbols corresponding to Chinese or Japanese characters, to Korean morpho-syllabic blocks, or to symbols of other languages that use a large number of symbols for writing and printing. In one implementation, the methods and systems to which the current document is directed carry out an initial processing step on one or more scanned images to identify a set of graphemes that most likely correspond to each symbol image that occurs in the scanned document image. The graphemes are selected for a symbol image based on accumulated votes generated from symbol patterns identified as likely related to the symbol image using one or more decision forests.
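The vote-accumulation step can be sketched as follows: each tree of the forest maps a symbol image's features to a symbol pattern, and the votes are accumulated over the graphemes related to each predicted pattern. The trees here are hand-written functions with invented split criteria, purely to illustrate the accumulation; a real system would use trained decision trees over image features.

```python
from collections import Counter

# Stand-in decision trees (invented splits, for illustration only).
def tree_a(features):   # splits on stroke count
    return "pattern_A" if features["strokes"] <= 3 else "pattern_B"

def tree_b(features):   # splits on aspect ratio
    return "pattern_A" if features["aspect"] < 1.0 else "pattern_B"

def tree_c(features):   # splits on a combined criterion
    return "pattern_A" if features["strokes"] + features["aspect"] < 4 else "pattern_B"

# Hypothetical mapping from symbol patterns to related graphemes.
PATTERN_TO_GRAPHEMES = {"pattern_A": ["g1", "g2"], "pattern_B": ["g3"]}

def rank_graphemes(features, forest=(tree_a, tree_b, tree_c)):
    """Accumulate votes from every tree's predicted pattern and return the
    graphemes ranked by vote count."""
    votes = Counter()
    for tree in forest:
        for grapheme in PATTERN_TO_GRAPHEMES[tree(features)]:
            votes[grapheme] += 1
    return [g for g, _ in votes.most_common()]

print(rank_graphemes({"strokes": 2, "aspect": 0.8}))  # ['g1', 'g2']
```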