Patent classifications
G06V30/387
SYSTEMS AND METHODS TO VERIFY VALUES INPUT VIA OPTICAL CHARACTER RECOGNITION AND SPEECH RECOGNITION
Disclosed are systems, methods, and non-transitory computer-readable media for data input with multi-format validation. The method may include receiving data input via a microphone mounted on a user device and receiving the same data input via a camera mounted on the user device. Additionally, the method may include comparing the data input via the microphone with the data input via the camera and determining whether the comparison of the data input exceeds a predetermined confidence level. Additionally, the method may include storing the data input upon determining that the comparison exceeds the predetermined confidence level, and presenting the user with a notification of validation upon determining that the comparison does not exceed the predetermined confidence level. Additionally, the method may include receiving from the user a validation of the data input based on the notification and storing the data input based on that validation.
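The core of this abstract is the compare-then-branch step: agree above a threshold, store; otherwise, ask the user. A minimal sketch of that logic is below, assuming a simple string-similarity ratio stands in for the patent's unspecified comparison; the function name, threshold value, and returned record are all hypothetical, not taken from the patent.

```python
from difflib import SequenceMatcher

CONFIDENCE_THRESHOLD = 0.9  # assumed stand-in for the "predetermined confidence level"

def validate_multi_format(speech_text: str, ocr_text: str) -> dict:
    """Compare the value captured via speech recognition with the value
    captured via OCR, and decide whether to store it or flag it for
    user validation."""
    score = SequenceMatcher(None, speech_text.lower(), ocr_text.lower()).ratio()
    if score >= CONFIDENCE_THRESHOLD:
        # Inputs agree: store the value directly.
        return {"action": "store", "value": ocr_text, "confidence": score}
    # Inputs disagree: present a notification of validation to the user.
    return {"action": "notify_user", "value": ocr_text, "confidence": score}
```

For example, `validate_multi_format("invoice 1042", "Invoice 1042")` would store the value, while a mismatched pair would route to user validation instead.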
Method and apparatus for recognizing handwritten characters using federated learning
Provided is a method for recognizing handwritten characters in a terminal through federated learning. In the method, a first common prediction model for recognizing text from handwritten characters input by a user is applied, the handwritten characters are received from the user, feature values are extracted from an image including the handwritten characters, the feature values are input to the first common prediction model, first text information is determined from an output of the first common prediction model, the first text information and second text information received from the user for error correction of the first text information are cached, and the first common prediction model is trained using the image including the handwritten characters, the first text information, and the second text information. In this way, the terminal can determine the text from the handwritten characters input by the user and can train the first common prediction model through a feedback operation of the user.
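The predict → correct → cache → locally train loop can be sketched as follows. This is a toy illustration, assuming a trivial lookup model and a placeholder feature extractor; `ToyModel`, `extract_features`, and the class names are hypothetical stand-ins, and the actual federated aggregation across terminals is omitted.

```python
class ToyModel:
    """Hypothetical stand-in for the first common prediction model."""
    def __init__(self):
        self.mapping = {}

    def predict(self, features):
        return self.mapping.get(features, "?")  # "?" = unrecognized

    def fit(self, features, label):
        self.mapping[features] = label


def extract_features(image: bytes) -> str:
    # Placeholder feature extraction: a digest of the raw image bytes.
    return str(hash(image))


class HandwritingRecognizer:
    def __init__(self, model):
        self.model = model
        self.cache = []  # (features, first_text, second_text) triples

    def recognize(self, image: bytes):
        features = extract_features(image)
        first_text = self.model.predict(features)
        return features, first_text

    def correct(self, features, first_text, second_text):
        # Cache the prediction and the user's correction for local training.
        self.cache.append((features, first_text, second_text))

    def local_update(self):
        # Train the common model on cached corrections — the local step
        # that federated learning would later aggregate across terminals.
        for features, _first_text, corrected in self.cache:
            self.model.fit(features, corrected)
        self.cache.clear()
```

After a user corrects a misread character and `local_update()` runs, the same image is recognized correctly on the next pass.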
SYSTEM AND METHOD TO MODIFY TRAINING CONTENT PRESENTED BY A TRAINING SYSTEM BASED ON FEEDBACK DATA
A system includes a training system configured to display first image data and a remote expert system configured to display second image data that corresponds to the first image data, receive feedback data associated with the second image data, and transmit a command to the training system based on the feedback data. The command is configured to modify the first image data presented via the training system.
SYSTEM AND METHOD OF HANDWRITING RECOGNITION IN DIAGRAMS
A system, method, and computer program product for hand-drawing diagrams including text and non-text elements on a computing device are provided. The computing device has a processor and a non-transitory computer-readable medium for detecting and recognizing hand-drawn diagram element input under control of the processor. Input diagram elements are displayed in interactive digital ink on a display device associated with the computing device. One or more of the diagram elements are associated with one or more others of the diagram elements in accordance with the class and type of each diagram element. The diagram elements are re-displayed based on one or more interactions with the received digital ink and in accordance with the one or more associations.
Digital assessment user interface with editable recognized text overlay
Systems and methods are provided by which information such as text may be extracted from a captured digital image, and displayed as an editable overlay over the captured digital image in a digital user interface. One or more boundaries defining a region or regions of the captured digital image from which information is extracted may be displayed over the captured digital image, and may be selectively added, edited, or deleted, resulting in corresponding information in the editable overlay being added, edited, or deleted. Additionally, information in the editable overlay may be added, edited, or deleted directly. The extracted information may correspond to responses to a homework assignment or test depicted in the captured digital image. The extracted information may be arranged in ordered steps, with the order of the steps being editable, and individual steps being removable, addable, or otherwise editable via interaction with the user interface.
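The editable overlay described here is essentially an ordered list of image regions, each pairing a boundary with its extracted text, where steps can be added, edited, deleted, and reordered. A minimal data-model sketch is below; the class names, the `bbox` representation, and the method set are hypothetical, chosen only to illustrate the operations the abstract enumerates.

```python
from dataclasses import dataclass, field


@dataclass
class Region:
    bbox: tuple  # (x, y, width, height) boundary over the captured image
    text: str    # extracted text shown in the editable overlay


@dataclass
class AssessmentOverlay:
    steps: list = field(default_factory=list)  # ordered Regions (one per step)

    def add_step(self, index: int, region: Region):
        # Adding a boundary adds the corresponding overlay entry.
        self.steps.insert(index, region)

    def delete_step(self, index: int):
        # Deleting a boundary removes the corresponding overlay entry.
        del self.steps[index]

    def edit_text(self, index: int, new_text: str):
        # Information in the overlay may also be edited directly.
        self.steps[index].text = new_text

    def reorder(self, src: int, dst: int):
        # The order of the steps is itself editable.
        self.steps.insert(dst, self.steps.pop(src))
```

A UI layer would draw each `bbox` over the captured image and render `text` as the editable overlay on top of it.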
UNMANNED AUTONOMOUS VEHICLE AND METHOD OF CONTROLLING THE SAME
An unmanned autonomous vehicle and method of controlling the same are provided. The method includes acquiring a boarding intention of a passenger who intends to board the unmanned autonomous vehicle and determining whether to stop the unmanned autonomous vehicle at a bus stop based on the acquired boarding intention of the passenger.
INTERACTIVE METHOD FOR GENERATING STROKES WITH CHINESE INK PAINTING STYLE AND DEVICE THEREOF
An interactive method for generating strokes with Chinese ink painting style includes the steps of: obtaining an image including a pattern as an image object; obtaining a delimiting operation delimiting at least one stroke sample on a pre-stored ink painting sample; obtaining a basic outline, drawn by a user on the image object, forming a preliminary basic path of a stroke to be generated; correcting stroke outlines in the stroke sample to obtain accurate stroke samples as candidate stroke samples; using the candidate stroke samples as references to generate morphological sample groups; correcting the preliminary basic path to obtain an accurate basic path; selecting the morphological samples best matching the accurate basic path in the morphological sample groups as final stroke samples; and mapping style features of the final stroke samples onto the accurate basic path to generate an output image with Chinese ink painting style.
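The steps above form a pipeline: correct the samples, build morphological groups, correct the user's path, match a sample, then map its style onto the path. A bare skeleton of that data flow is sketched below; every stage is passed in as a callable because the patent does not specify the underlying image-processing algorithms, so all parameter names here are hypothetical.

```python
def generate_ink_stroke(image, ink_samples, user_path,
                        correct_outlines, build_morph_groups,
                        correct_path, match_sample, map_style):
    """Skeleton of the stroke-generation pipeline; each stage is a
    caller-supplied callable standing in for an unspecified algorithm."""
    candidates = correct_outlines(ink_samples)   # accurate stroke samples
    groups = build_morph_groups(candidates)      # morphological sample groups
    path = correct_path(user_path)               # accurate basic path
    sample = match_sample(groups, path)          # best-matching final sample
    return map_style(sample, path, image)        # stylized output image
```

Wiring in trivial stand-in stages shows how data threads through: the corrected path and the matched sample both reach the final style-mapping step.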
STROKE PREDICTION FOR STYLIZED DRAWINGS BASED ON PRIOR STROKES AND REFERENCE IMAGE
Embodiments provide systems, methods, and computer storage media for generating stroke predictions based on prior strokes and a reference image. An interactive drawing interface can allow a user to sketch over, or with respect to, a reference image. A UI tool such as an autocomplete or workflow clone tool can access or identify a set of prior strokes and a target region, and stroke predictions can be generated using an iterative algorithm that minimizes an energy function considering stroke-to-stroke and image-patch-to-image-patch comparisons. For any particular future stroke, one or more stroke predictions may be initialized based on the set of prior strokes. Each initialized prediction can be improved by iteratively executing search and assignment steps to incrementally improve the prediction, and the best prediction can be selected and presented as a stroke prediction for the future stroke. The process can be repeated to predict any number of future strokes.
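The initialize-then-search/assign iteration can be illustrated with a heavily simplified toy: strokes are point lists, the energy compares only the candidate's centroid offset from the last stroke against offsets observed between prior strokes, and the search step tries small translations at a shrinking radius. This is a sketch under those assumptions; the patent's energy also includes image-patch-to-image-patch terms against the reference image, which this omits entirely.

```python
import math

def centroid(stroke):
    xs, ys = zip(*stroke)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def translate(stroke, dx, dy):
    return [(x + dx, y + dy) for (x, y) in stroke]

def energy(candidate, prior_strokes):
    """Stroke-to-stroke energy: how far the candidate's offset from the
    last stroke deviates from the mean offset between prior strokes.
    Requires at least two prior strokes."""
    cx, cy = centroid(candidate)
    lx, ly = centroid(prior_strokes[-1])
    offsets = [(bx - ax, by - ay)
               for (ax, ay), (bx, by) in
               ((centroid(a), centroid(b))
                for a, b in zip(prior_strokes, prior_strokes[1:]))]
    mx = sum(o[0] for o in offsets) / len(offsets)
    my = sum(o[1] for o in offsets) / len(offsets)
    return math.dist((cx - lx, cy - ly), (mx, my))

def predict_next_stroke(prior_strokes, search_radius=2.0, iterations=20):
    # Initialization: seed the prediction from the last prior stroke.
    best = list(prior_strokes[-1])
    for _ in range(iterations):
        improved = False
        # Search step: try small translations of the current candidate.
        for dx in (-search_radius, 0, search_radius):
            for dy in (-search_radius, 0, search_radius):
                cand = translate(best, dx, dy)
                # Assignment step: keep the candidate if it lowers the energy.
                if energy(cand, prior_strokes) < energy(best, prior_strokes):
                    best, improved = cand, True
        if not improved:
            search_radius /= 2  # refine the search at a finer scale
    return best
```

Given three evenly spaced vertical strokes, the prediction converges to a fourth stroke continuing the spacing, mirroring how an autocomplete tool would extend a repetitive hatching pattern.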
Electronic apparatus, method, and program
According to one embodiment, an electronic apparatus performs a character recognition process. If a stroke in a first area of a first handwritten document and a stroke in a second area are the same, the apparatus reuses the character recognition result of the first handwritten document; if a stroke in the first area and a stroke in the second area are different, it performs the character recognition process for the second area including the different stroke.
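The reuse-or-rerecognize decision amounts to a per-area stroke comparison against a cached result. A minimal sketch, assuming documents are represented as mappings from area identifiers to stroke data; the function and parameter names are hypothetical.

```python
def incremental_recognition(first_doc, second_doc, first_results, recognize):
    """first_doc, second_doc: {area_id: stroke data} for the two documents.
    first_results: {area_id: recognized text for the first document}.
    recognize: the recognition routine, run only on changed areas."""
    results = {}
    for area, strokes in second_doc.items():
        if area in first_doc and first_doc[area] == strokes:
            # Strokes unchanged: reuse the first document's recognition result.
            results[area] = first_results[area]
        else:
            # Strokes differ (or the area is new): re-run recognition here only.
            results[area] = recognize(strokes)
    return results
```

Passing a counting stub as `recognize` confirms that only areas with changed strokes trigger the (typically expensive) recognition process.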