Patent classifications
G06V30/1423
ELECTRONIC DEVICE AND HANDWRITING RECOGNITION METHOD
According to certain embodiments, an electronic device may include a display, a memory, and a processor operatively connected to the display and the memory. The processor may be configured to, while receiving a user's touch input in a handwriting area of the display, the user's touch input comprising successive input strokes: output the successive input strokes in the handwriting area on the display; determine a first stroke group including some of the successive input strokes; determine a first character corresponding to the first stroke group; output the first stroke group in an output area adjacent to the handwriting area on the display; determine a second stroke group including at least another input stroke received after the some of the successive input strokes; determine a second character corresponding to the second stroke group; output the second stroke group in the output area; and move the first stroke group to one side of the second stroke group on the display.
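The stroke-grouping flow described above can be sketched as follows. The time-gap grouping criterion is an illustrative assumption, not the patent's actual method, and `group_strokes` is a hypothetical helper name.

```python
# Hypothetical sketch: group successive handwriting strokes so each group
# can be recognized as one character. The pause-based criterion below is an
# illustrative assumption, not the patent's actual grouping method.

def group_strokes(strokes, gap_threshold=0.3):
    """Split a list of (start_time, end_time) strokes into groups whenever
    the pause between consecutive strokes exceeds gap_threshold seconds."""
    groups, current = [], []
    for stroke in strokes:
        if current and stroke[0] - current[-1][1] > gap_threshold:
            groups.append(current)
            current = []
        current.append(stroke)
    if current:
        groups.append(current)
    return groups

# Strokes as (start, end) times: the 0.5 s pause splits them into two groups,
# which a recognizer would then map to a first and a second character.
strokes = [(0.0, 0.1), (0.15, 0.25), (0.75, 0.9)]
print(group_strokes(strokes))  # [[(0.0, 0.1), (0.15, 0.25)], [(0.75, 0.9)]]
```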
Ink file searching method, apparatus, and program
An ink file output method is provided, which includes: generating M (M is an integer of 1 or more) pieces of stroke data SD on the basis of event data generated as M input devices move, respectively; generating N (N is an integer of 1 or more and M or less) kinds of logical names LN (metadata) identifying the M input devices; generating a metadata block associating the M pieces of stroke data SD with the N kinds of logical names LN; and writing the M pieces of stroke data SD and the metadata block to an ink file.
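The file layout described above can be sketched as follows. The JSON container and the field names are illustrative assumptions; the patent does not specify this serialization.

```python
import json

# Hypothetical sketch of the ink-file layout: M stroke-data entries, each
# associated via a metadata block with one of N logical names identifying
# its input device. JSON is an illustrative choice, not the actual format.

def build_ink_file(stroke_data, logical_names):
    """stroke_data: list of M point lists; logical_names: list of M names
    (drawn from at most N distinct values). Returns the serialized file."""
    assert len(stroke_data) == len(logical_names)
    metadata_block = [
        {"stroke_index": i, "logical_name": name}
        for i, name in enumerate(logical_names)
    ]
    return json.dumps({"strokes": stroke_data, "metadata": metadata_block})

# Two strokes from two pens, tagged with their logical names.
ink = build_ink_file([[[0, 0], [1, 1]], [[5, 5]]], ["pen-A", "pen-B"])
```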
GESTURE STROKE RECOGNITION IN TOUCH-BASED USER INTERFACE INPUT
A method for recognizing gesture strokes in user input, comprising: receiving data generated based on the user input, the data representing a stroke and comprising a plurality of ink points in a rectangular coordinate space and a plurality of timestamps associated respectively with the plurality of ink points; segmenting the plurality of ink points into a plurality of segments each corresponding to a respective sub-stroke of the stroke and comprising a respective subset of the plurality of ink points; generating a plurality of feature vectors based respectively on the plurality of segments; and applying the plurality of feature vectors as an input sequence representing the stroke to a trained stroke classifier to generate a vector of probabilities including a probability that the stroke is a non-gesture stroke and a probability that the stroke is a given gesture stroke of a set of gesture strokes.
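The segment-then-classify pipeline above can be sketched as follows. The fixed-size segmentation, the chosen features, and the stub classifier are all assumptions for illustration; the patent's trained stroke classifier is not reproduced here.

```python
# Illustrative sketch of the pipeline: segment an inked stroke into
# sub-strokes, build one feature vector per segment, and hand the sequence
# to a classifier that outputs a probability vector. The segmentation rule,
# features, and stub classifier are assumptions, not the trained model.

def segment(points, size=4):
    """Split (x, y, t) ink points into consecutive sub-strokes of `size` points."""
    return [points[i:i + size] for i in range(0, len(points), size)]

def feature_vector(seg):
    """Net displacement and duration of one sub-stroke."""
    (x0, y0, t0), (x1, y1, t1) = seg[0], seg[-1]
    return (x1 - x0, y1 - y0, t1 - t0)

def classify(feature_seq):
    """Stub standing in for the trained classifier: returns probabilities
    over {non-gesture stroke, gesture stroke}."""
    total_dx = sum(f[0] for f in feature_seq)
    p_gesture = 1.0 if total_dx < 0 else 0.0  # toy rule, not a real model
    return [1.0 - p_gesture, p_gesture]

points = [(x, 0, x * 0.01) for x in range(8)]        # left-to-right stroke
probs = classify([feature_vector(s) for s in segment(points)])
```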
METHOD AND SYSTEM FOR INK DATA GENERATION, INK DATA RENDERING, INK DATA MANIPULATION AND INK DATA COMMUNICATION
A method implemented by a transmission device to communicate with multiple reception devices that respectively share a drawing area with the transmission device is provided. The transmission device transmits to the multiple reception devices vector-data ink data representative of traces of input operation detected by an input sensor of the transmission device. The method includes: (a) an ink data generation step of generating fragmented data of a stroke object, wherein the stroke object contains multiple point objects to represent a trace formed by a pointer, the fragmented data being generated per defined unit T, and generating a drawing style object; (b) a message formation step of generating messages including the drawing style object and the fragmented data; and (c) a transmission step of transmitting the messages.
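Steps (a) through (b) above can be sketched as follows. The message layout and the interpretation of unit T as a point count are illustrative assumptions.

```python
# Sketch of the generation and message-formation steps: fragment a stroke
# object's point objects per unit T and wrap each fragment in a message
# that also carries the drawing-style object. The message layout and the
# reading of T as a point count are illustrative assumptions.

def fragment_stroke(points, unit_t):
    """Yield fragments of at most unit_t point objects each."""
    for i in range(0, len(points), unit_t):
        yield points[i:i + unit_t]

def make_messages(points, style, unit_t=2):
    """Pair the drawing-style object with each fragment for transmission."""
    return [{"style": style, "fragment": frag}
            for frag in fragment_stroke(points, unit_t)]

msgs = make_messages([(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)],
                     style={"width": 2, "color": "blue"})
```

A transmission step would then send each message to the reception devices in order, letting them render the trace before the full stroke is complete.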
DEVICES AND METHODS FOR GENERATING INPUT
Devices and methods are disclosed for generating input. In one implementation, a stylus is provided for generating writing input. The stylus includes an elongated body having a distal end, and a light source configured to project coherent light on an opposing surface adjacent the distal end. The stylus further includes at least one sensor configured to measure first reflections of the coherent light from the opposing surface while the distal end moves in contact with the opposing surface, and to measure second reflections of the coherent light from the opposing surface while the distal end moves above the opposing surface and out of contact with the opposing surface. The stylus also includes at least one processor configured to receive input from the at least one sensor and to enable determining three dimensional positions of the distal end based on the first reflections and the second reflections.
Storage Medium Storing Editing Program and Information Processing Apparatus
A non-transitory computer-readable storage medium stores an editing program including a set of program instructions for an information processing apparatus comprising a controller and an input interface. The set of program instructions, when executed by the controller, causes the information processing apparatus to perform: acquiring a plurality of strokes inputted via the input interface; calculating a distance between two strokes of the acquired plurality of strokes; in response to determining that the calculated distance is shorter than a distance threshold, recognizing the two strokes as a same item; in response to determining that the calculated distance is longer than or equal to the distance threshold, recognizing the two strokes as separate items; and changing the distance threshold based on input via the input interface.
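The claimed grouping rule can be sketched as follows. The endpoint-to-endpoint distance metric is an assumption; the claim does not specify how the inter-stroke distance is measured.

```python
import math

# Minimal sketch of the grouping rule: two strokes belong to the same item
# when the distance between them is below an adjustable threshold. The
# endpoint-to-endpoint metric is an illustrative assumption.

def stroke_distance(a, b):
    """Distance between the end of stroke a and the start of stroke b."""
    (x0, y0), (x1, y1) = a[-1], b[0]
    return math.hypot(x1 - x0, y1 - y0)

def same_item(a, b, threshold):
    return stroke_distance(a, b) < threshold

s1 = [(0, 0), (3, 4)]   # ends at (3, 4)
s2 = [(3, 7), (5, 9)]   # starts 3 units away
print(same_item(s1, s2, threshold=5))  # True: grouped as one item
print(same_item(s1, s2, threshold=2))  # False: separate after tightening
```

Changing the threshold via the input interface, as the last claim element describes, simply regroups the same strokes without re-measuring them.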
Position detection method, position detection device, and display device
Position detection methods and systems are disclosed herein. The position detection method of detecting a position on an operation surface pointed to by a pointing element includes: obtaining a first taken image with a first infrared camera; obtaining a second taken image with a second infrared camera; removing a noise component from the first and second taken images to convert them into first and second converted images without the noise component; forming a difference image between the first converted image and the second converted image; extracting a candidate area in which a disparity amount between the first converted image and the second converted image is within a predetermined range; detecting a tip position of the pointing element from the candidate area; and determining a pointing position of the pointing element, and whether or not the pointing element is in contact with the operation surface, based on the detecting.
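The difference-image and candidate-area steps can be sketched on a toy 1-D example. Real disparity estimation operates on 2-D infrared frames; this single-row version, with assumed bounds, only illustrates the flow.

```python
# Toy sketch of the candidate-area step on two already-denoised single-row
# "images": pixels whose left/right difference falls inside the disparity
# window form the candidate area. Bounds lo/hi are illustrative assumptions.

def candidate_area(left, right, lo, hi):
    """Indices where |left - right| lies within the [lo, hi] range."""
    return [i for i, (a, b) in enumerate(zip(left, right))
            if lo <= abs(a - b) <= hi]

left_row  = [0, 10, 40, 90, 0]
right_row = [0, 12, 70, 95, 0]
print(candidate_area(left_row, right_row, lo=1, hi=10))  # [1, 3]
```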
Image processing method and apparatus for smart pen, and electronic device
An image processing method and apparatus for a smart pen, and an electronic device, are provided in embodiments of the present disclosure, and belong to the technical field of data processing. The method comprises: monitoring a working state of a second pressure switch provided at a pen tip of a smart pen after a first pressure switch of the smart pen is in a closed state; controlling an image collection module on the smart pen to collect a reflected infrared signal from an area where the smart pen writes; performing feature extraction processing on an original image to obtain a feature matrix corresponding to the original image; determining, based on a current load status of the smart pen, the number of convolutional layers used for convolution processing among parallel convolutional layers; and adding current time information to a trajectory classification result to form a time-ordered trajectory vector.
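The load-adaptive step can be sketched as follows. The breakpoints and layer counts are illustrative assumptions; the disclosure does not state the actual mapping from load to layer count.

```python
# Hypothetical sketch of the load-adaptive step: choose how many of the
# parallel convolutional layers to run from the smart pen's current load.
# The breakpoints and layer counts below are illustrative assumptions.

def layers_for_load(load, max_layers=4):
    """Run fewer parallel convolutional layers as device load rises."""
    if load < 0.3:
        return max_layers     # light load: use all parallel layers
    if load < 0.7:
        return max_layers // 2
    return 1                  # heavy load: fall back to a single layer

print(layers_for_load(0.1), layers_for_load(0.5), layers_for_load(0.9))  # 4 2 1
```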
GENERATING VISUAL FEEDBACK
A method for generating visual feedback based on a textual representation comprising obtaining and processing a textual representation, identifying at least one textual feature of the textual representation, assigning at least one feature value to the at least one textual feature, and generating visual feedback based on the textual representation. The generated visual feedback comprises at least one visual feature corresponding to the at least one textual feature. A system for generating visual feedback based on a textual representation, comprising a capturing subsystem configured to capture the textual representation, a processing subsystem configured to identify at least one textual feature and to generate visual feedback based on the textual representation, and a graphical user output configured to display the generated visual feedback. The visual feedback generated based on the textual representation comprises at least one visual feature corresponding to the at least one textual feature.
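The feature-to-feedback mapping above can be sketched as follows. The specific textual features (exclamation count, length) and the visual features they drive are assumptions chosen for illustration; the patent leaves the features open.

```python
# Illustrative sketch of the pipeline: identify textual features, assign
# them feature values, and derive corresponding visual features. The
# features chosen here are assumptions, not the patent's feature set.

def textual_features(text):
    """Identify textual features and assign each a value."""
    return {"exclamations": text.count("!"), "length": len(text)}

def visual_feedback(text):
    """Map each textual feature value onto a corresponding visual feature."""
    values = textual_features(text)
    intensity = min(1.0, 0.25 * values["exclamations"])
    return {"highlight_intensity": intensity, "bar_width": values["length"]}

fb = visual_feedback("Well done!!")  # two '!' -> 0.5 intensity, width 11
```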
Context based annotating in an electronic presentation system
A presentation system capable of detecting one or more gestures and contacts on a touch sensitive display. The presentation system can displaying indicia of such contacts, such as when a user writes with a fingertip, and can remove or alter such indicia responsive to other gestures and contacts. The system can accurately distinguish between types of gestures detected, such as between a writing gesture and an erasing gesture, on both large and small touch sensitive displays, thereby obviating the need for a user to make additional selective inputs to transition from one type of gesture to another. The system can determine how long to keep user annotations displayed during a presentation, based on the nature of the gesture used to make the annotations and the context in which they are made.