G06V30/36

Method of processing and recognizing hand-written characters
11256946 · 2022-02-22

The present disclosure relates to a method and system of processing original handwriting input, the system and method being capable of recognizing a plurality of strokes provided on the input recognition interface, the method including: determining a stroke box around each stroke; determining overlap between the stroke boxes; correlating overlapping stroke boxes to one or more characters; providing a character box around each of the one or more characters; determining overlap between character boxes; correlating overlapping character boxes to one or more words; providing a word box around each of the one or more words; providing a word margin around each of the one or more word boxes; determining overlap between each word box to determine a line; wherein each of the characters, words, or lines can be individually selected and rearranged, the system automatically adjusting spacing or placement of surrounding elements to allow for the rearrangement.
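The box-overlap grouping described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the coordinate layout, margin handling, and helper names are all assumptions. Applying `group_by_overlap` to stroke boxes yields characters; to character boxes, words; and, with a word margin, to word boxes, lines.

```python
def bbox(points):
    """Axis-aligned bounding box (x0, y0, x1, y1) around a stroke's points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def boxes_overlap(a, b, margin=0.0):
    """True when two boxes (each optionally grown by a margin) intersect."""
    return not (a[2] + margin < b[0] - margin or b[2] + margin < a[0] - margin or
                a[3] + margin < b[1] - margin or b[3] + margin < a[1] - margin)

def group_by_overlap(boxes, margin=0.0):
    """Merge boxes into groups of transitively overlapping boxes.

    Each group is (merged_box, member_indices); newly seen boxes absorb
    every existing group they touch, so overlap is transitive.
    """
    groups = []
    for i, box in enumerate(boxes):
        merged = [g for g in groups if boxes_overlap(g[0], box, margin)]
        rest = [g for g in groups if g not in merged]
        mbox, members = box, [i]
        for gbox, gmembers in merged:
            mbox = (min(mbox[0], gbox[0]), min(mbox[1], gbox[1]),
                    max(mbox[2], gbox[2]), max(mbox[3], gbox[3]))
            members += gmembers
        groups = rest + [(mbox, members)]
    return groups
```

With three stroke boxes, two of which overlap, the first two merge into one character group while the third stays separate.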

Digital-image shape recognition using tangents and change in tangents
11256948 · 2022-02-22

In one aspect, a method of optical character recognition of digital character objects in digital images includes the step of obtaining a digital image. The digital image includes a rendering of a first object. The first object comprises a set of sub-objects and a set of relationships between the sub-objects. The method includes the step of generating a definition of the first object by defining an object outline for the first object as a set of sub-objects; defining a sub-object outline for each sub-object as a set of lines and curves; and defining each relationship between each set of connected sub-objects in terms of one or more intersections or one or more corners.
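One way to locate the corners mentioned above is to track the change in tangent direction along a sampled outline. This sketch is illustrative only: the polyline sampling and the 45-degree turn threshold are assumptions for the example, not parameters from the patent.

```python
import math

def tangent_angles(points):
    """Direction (radians) of each straight segment of a polyline outline."""
    return [math.atan2(q[1] - p[1], q[0] - p[0])
            for p, q in zip(points, points[1:])]

def corner_indices(points, threshold_deg=45.0):
    """Indices of outline points where the tangent direction turns sharply."""
    angles = tangent_angles(points)
    corners = []
    for i in range(1, len(angles)):
        turn = abs(angles[i] - angles[i - 1])
        turn = min(turn, 2 * math.pi - turn)   # handle wrap-around at +/- pi
        if math.degrees(turn) >= threshold_deg:
            corners.append(i)                  # vertex between the two segments
    return corners
```

On a square outline every interior vertex shows a 90-degree tangent change and is reported as a corner, while a straight polyline reports none.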

Character recognition apparatus, character recognition processing system, and non-transitory computer readable medium
09792495 · 2017-10-17

A character recognition apparatus includes a stroke extracting unit, a noise-candidate extracting unit, a generating unit, a recognition unit, and a specifying unit. The stroke extracting unit extracts multiple strokes from a recognition target. The noise-candidate extracting unit extracts noise candidates from the strokes. The generating unit generates multiple recognition result candidates obtained by removing at least one of the noise candidates from the recognition target. The recognition unit performs character recognition on the recognition result candidates and obtains recognition scores. The specifying unit uses the recognition scores to specify a most likely recognition result candidate from the recognition result candidates as a recognition result.
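The candidate-and-score scheme above can be sketched as an exhaustive search over noise-candidate removals. This is a toy illustration: strokes are plain labels, and `score_fn` is a stand-in for the recognition unit, which in a real apparatus would run a character recognizer on each candidate.

```python
from itertools import combinations

def best_candidate(strokes, noise_candidates, score_fn):
    """Remove every non-empty subset of noise candidates, score each
    resulting recognition candidate, and keep the highest-scoring one."""
    best, best_score = None, float("-inf")
    for k in range(1, len(noise_candidates) + 1):
        for removed in combinations(noise_candidates, k):
            candidate = [s for s in strokes if s not in removed]
            score = score_fn(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score
```

With a scorer that penalizes leftover noise strokes, the candidate with both noise strokes removed wins.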

ELECTRONIC INFORMATION BOARD APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
20170293826 · 2017-10-12

An electronic information board apparatus includes: a guide generating unit configured to display a handwriting region on a screen; a coordinate detecting unit configured to detect coordinates of an indication body moving in the handwriting region on the screen; an image drawing unit configured to generate a stroke image based on the coordinates and display the generated stroke image in the handwriting region on a first layer of the screen; a character recognizing unit configured to execute character recognition based on a hand-written image that is hand-written inside the handwriting region and output text data; and a display superimposing unit configured to display the text data acquired from the character recognizing unit at a position that is approximately the same as that of the hand-written image that is hand-written inside the handwriting region on the screen, and on a second layer of the screen different from the first layer.
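The two-layer arrangement above can be modeled minimally: the stroke image stays on layer 1 and the recognized text is placed on layer 2 at the same position, so the text superimposes the handwriting when layers render bottom-up. Class and field names here are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class BoardElement:
    layer: int
    position: tuple   # (x, y) inside the handwriting region
    content: str      # stroke image id or recognized text

@dataclass
class Board:
    elements: list = field(default_factory=list)

    def draw_stroke(self, pos, stroke_id):
        # stroke image on the first layer
        self.elements.append(BoardElement(1, pos, stroke_id))

    def show_recognized_text(self, pos, text):
        # recognized text at (approximately) the same position, second layer
        self.elements.append(BoardElement(2, pos, text))

    def render_order(self):
        """Elements in draw order: higher layers painted last, i.e. on top."""
        return sorted(self.elements, key=lambda e: e.layer)
```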

Method and apparatus for augmented reality
11501504 · 2022-11-15

The disclosure provides an augmented reality system (200) including an input unit (204), a text recognition unit (206), a natural language processing unit (208), a positioning unit (210), and an output unit (212). The input unit (204) captures an image. The text recognition unit (206) identifies information on a surface depicted in the image and generates input data based on the information. The natural language processing unit (208) determines a context of the input data and generates at least one assistive information based on the context. The positioning unit (210) determines one or more spatial attributes based on the image and generates positioning information based on the spatial attributes. The output unit (212) displays the assistive information based on the positioning information.

Ink Input for Browser Navigation

Techniques for ink input for browser navigation are described. Generally, ink refers to freehand input to a touch-sensing functionality and/or a functionality for sensing touchless gestures, which is interpreted as digital ink. According to various embodiments, ink input for browser navigation provides a seamless integration of an ink input canvas with a web browser graphical user interface (“GUI”) to enable intuitive input of network addresses (e.g., web addresses) via ink input.

Spatiotemporal Method for Anomaly Detection in Dictionary Learning and Sparse Signal Recognition

A method for constructing a dictionary to represent data from a training data set comprising: modeling the data as a linear combination of columns; modeling outliers in the data set via deterministic outlier vectors; formatting the training data set in matrix form for processing; defining an underlying structure in the data set; quantifying a similarity across the data; building a Laplacian matrix; using group-Lasso regularizers to succinctly represent the data; choosing scalar parameters for controlling the number of dictionary columns used to represent the data and the number of elements of the training data set identified as outliers; using block coordinate descent (BCD) and proximal gradient (PG) methods on the vector-matrix-formatted data set to estimate a dictionary, corresponding expansion coefficients, and the outlier vectors; and using a length of the outlier vectors to identify outliers in the data.
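One piece of the machinery above can be sketched concretely: under a group-Lasso penalty, the proximal step a PG method applies to each outlier vector is block soft-thresholding, and samples whose outlier vector survives thresholding (non-zero length) are flagged as outliers. The regularization weight `lam` below is an assumed parameter for the example.

```python
import math

def block_soft_threshold(vec, lam):
    """Shrink a vector toward zero; zero it out entirely if its norm <= lam."""
    norm = math.sqrt(sum(v * v for v in vec))
    if norm <= lam:
        return [0.0] * len(vec)
    scale = 1.0 - lam / norm
    return [scale * v for v in vec]

def flag_outliers(outlier_vectors, lam):
    """Indices of training samples whose outlier vector remains non-zero
    after thresholding, i.e. samples identified as outliers."""
    return [i for i, vec in enumerate(outlier_vectors)
            if any(v != 0.0 for v in block_soft_threshold(vec, lam))]
```

A vector with small norm is zeroed out (an inlier), while a large outlier vector is only shrunk, so its sample is flagged.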

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM
20170249294 · 2017-08-31

An image processing device includes a handwriting renderer, an image renderer, an external image renderer, a serializer, a creator, a recognizer, and a concatenation unit. The handwriting renderer is configured to render a stroke on a first layer. The image renderer is configured to render an image on a second layer lower than the first layer. The external image renderer is configured to render an external image on a third layer lower than the second layer. The serializer is configured to convert the stroke rendered on the first layer and the images rendered on the second and third layers into text data. The creator is configured to create document data corresponding to one page based on the text data. The recognizer is configured to acquire a character string from the stroke. The concatenation unit is configured to concatenate adjacent characters in the string, deleting any unnecessary space between them.
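The concatenation step can be sketched with a simple heuristic: recognized characters arrive with horizontal extents, and characters separated by less than a gap threshold are joined without a space, while larger gaps keep a single space. The tuple layout and threshold are assumptions for this example, not the patent's method.

```python
def concatenate(chars, gap_threshold=10.0):
    """chars: list of (text, x_start, x_end), sorted left to right.
    Joins characters whose inter-character gap is below the threshold."""
    if not chars:
        return ""
    out = [chars[0][0]]
    for (text, x0, _), (_, _, prev_x1) in zip(chars[1:], chars):
        if x0 - prev_x1 < gap_threshold:
            out.append(text)          # unnecessary space: drop it
        else:
            out.append(" " + text)    # real word gap: keep one space
    return "".join(out)
```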

Method for generating writing data and an electronic device thereof

An apparatus and a method for generating writing data by obtaining data generation information in an electronic device are provided. A method for inputting data in the electronic device includes displaying an attribute of the writing data upon detecting a first input, checking a type of the writing data upon detecting a second input, determining output writing data, and displaying the output writing data according to the attribute of the writing data. The attribute of the writing data includes at least one of a position at which the writing data is to be generated, a length, an angle, or a vertex of a line of the writing data.
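The attributes listed above (position, length, angle) are sufficient to generate a straight stroke; this sketch computes its endpoint from them. The function name and angle convention (counter-clockwise from the x-axis, in degrees) are illustrative assumptions.

```python
import math

def generate_line(position, length, angle_deg):
    """Return (start, end) of a line of the given length and angle,
    starting at `position`."""
    x, y = position
    rad = math.radians(angle_deg)
    end = (x + length * math.cos(rad), y + length * math.sin(rad))
    return (position, end)
```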

Method and system for the spotting of arbitrary words in handwritten documents
09740925 · 2017-08-22

A method and system for the spotting of keywords in a handwritten document, the method comprising the steps of inputting an image of the handwritten document, performing word segmentation on the image to obtain segmented words, performing word matching, and outputting the spotted keywords. The word matching itself consists of the substeps of performing character segmentation on the segmented words, performing character recognition on the segmented characters, performing distance computations on the recognized characters using a Generalized Hidden Markov Model with ergodic topology to identify words based on character models, and performing non-keyword rejection using a classifier based on a combination of Gaussian Mixture Models, Hidden Markov Models and Support Vector Machines.
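The distance computation and rejection stages above use a GHMM and a GMM/HMM/SVM ensemble; as a much simpler stand-in, this sketch spots keywords by Levenshtein distance over already-recognized character strings and rejects matches beyond a threshold. It illustrates the spot-then-reject flow only, not the patent's models.

```python
def edit_distance(a, b):
    """Levenshtein distance via the standard dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def spot_keywords(recognized_words, keywords, max_distance=1):
    """(word, keyword) pairs within the distance threshold; other words
    are rejected as non-keywords."""
    return [(word, kw)
            for word in recognized_words
            for kw in keywords
            if edit_distance(word, kw) <= max_distance]
```

A slightly misrecognized word ("lnvoice") still matches the keyword "invoice", while unrelated words are rejected.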