Patent classifications
G06V30/182
GESTURE STROKE RECOGNITION IN TOUCH-BASED USER INTERFACE INPUT
A method for recognizing gesture strokes in user input, comprising: receiving data generated based on the user input, the data representing a stroke and comprising a plurality of ink points in a rectangular coordinate space and a plurality of timestamps associated respectively with the plurality of ink points; segmenting the plurality of ink points into a plurality of segments each corresponding to a respective sub-stroke of the stroke and comprising a respective subset of the plurality of ink points; generating a plurality of feature vectors based respectively on the plurality of segments; and applying the plurality of feature vectors as an input sequence representing the stroke to a trained stroke classifier to generate a vector of probabilities including a probability that the stroke is a non-gesture stroke and a probability that the stroke is a given gesture stroke of a set of gesture strokes.
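The claimed pipeline (segmenting ink points into sub-strokes, building per-segment feature vectors, feeding them to a classifier) could be sketched roughly as follows. The pause-based segmentation criterion, the specific features, and the softmax step are illustrative assumptions, not details from the patent:

```python
import math

def segment_stroke(points, timestamps, gap_ms=80):
    """Split a stroke into sub-strokes wherever the pause between
    consecutive ink points exceeds gap_ms (hypothetical criterion)."""
    segments, current = [], [0]
    for i in range(1, len(points)):
        if timestamps[i] - timestamps[i - 1] > gap_ms:
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return [[points[i] for i in seg] for seg in segments]

def feature_vector(segment):
    """Toy per-segment features: point count, path length, net dx, net dy."""
    length = sum(math.dist(segment[i - 1], segment[i])
                 for i in range(1, len(segment)))
    dx = segment[-1][0] - segment[0][0]
    dy = segment[-1][1] - segment[0][1]
    return [len(segment), length, dx, dy]

def softmax(scores):
    """Turn raw classifier scores into the claimed probability vector
    (non-gesture plus one entry per gesture class)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

A trained stroke classifier would consume the sequence of feature vectors; here only the surrounding plumbing is shown.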
VERTEX CHANGE DETECTION FOR ENHANCED DOCUMENT CAPTURE
Aspects of the present disclosure relate to object-based image capture. Embodiments include identifying a reference point corresponding to an object in an image of a series of images. Embodiments include comparing a position of the reference point in the image to positions of one or more corresponding reference points in one or more previous images in the series of images. Embodiments include determining a total number of images in the series of images. Embodiments include selecting, based on the comparing and the total number of images in the series of images, between: capturing the image; or declining to capture the image.
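The capture/decline decision described above — comparing a reference point's position against its positions in previous frames — might look like this minimal sketch; the jitter threshold and stability window are invented for illustration:

```python
def should_capture(ref_points, max_jitter=2.0, min_stable_frames=3):
    """Decide whether to capture the latest frame: the tracked reference
    point must have moved less than max_jitter pixels between each of the
    last min_stable_frames frames (thresholds are illustrative)."""
    if len(ref_points) < min_stable_frames:
        return False  # not enough history in the series yet
    recent = ref_points[-min_stable_frames:]
    for (x0, y0), (x1, y1) in zip(recent, recent[1:]):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > max_jitter:
            return False  # reference point still moving: decline to capture
    return True
```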
AUTOMATICALLY DETERMINING TABLE LOCATIONS AND TABLE CELL TYPES
The present disclosure involves systems, software, and computer implemented methods for automatically identifying table locations and table cell types of located tables. One example method includes receiving a request to detect tables. Features are extracted from an input spreadsheet and provided to a trained table detection model trained to predict whether worksheet cells are table cells or background cells and to a cell classification model that is trained to classify worksheet cells by cell structure type. The table detection model generates binary classifications that indicate whether cells are table cells or background cells. A contour detection process is performed on the binary classifications to generate table location information that describes at least one table boundary in the spreadsheet. The trained cell classification model generates a cell structure type classification for each cell that is included in a table boundary generated by the contour detection process.
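The contour-detection step over the binary classifications can be approximated with connected-component labeling on the grid of table/background predictions. This stand-in (4-connectivity, bounding boxes as `(top, left, bottom, right)`) is an assumption about one reasonable realization, not the patent's exact procedure:

```python
from collections import deque

def table_bounding_boxes(binary):
    """Given a 2D grid of 0/1 cell classifications (1 = table cell),
    return the bounding box of each connected region of table cells."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                top, left, bottom, right = r, c, r, c
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:  # flood-fill one connected table region
                    y, x = queue.popleft()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes
```

Each returned box is a candidate table boundary; the cell classification model would then label only the cells inside those boxes.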
Failure mode discovery for machine components
The failure modes of mechanical components may be determined based on text analysis. For example, a word embedding may be determined based on a plurality of text documents that include a plurality of maintenance records characterizing failure of mechanical components. A vector representation for a particular maintenance record may then be determined based on the word embedding. Based on the vector representation, the particular maintenance record may then be identified as belonging to a particular failure mode out of a set of possible failure modes.
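The two steps above — building a vector representation of a maintenance record from a word embedding, then assigning it to a failure mode — could be sketched as averaging token vectors and picking the nearest mode centroid. The toy embedding and nearest-centroid assignment are illustrative assumptions:

```python
import math

def record_vector(tokens, embedding):
    """Average the word vectors of a record's tokens (toy word embedding
    given as a dict from token to vector)."""
    vecs = [embedding[t] for t in tokens if t in embedding]
    return [sum(component) / len(vecs) for component in zip(*vecs)]

def nearest_mode(vec, mode_centroids):
    """Assign the record to the failure mode whose centroid is closest
    in Euclidean distance (one simple choice of similarity)."""
    return min(mode_centroids, key=lambda m: math.dist(vec, mode_centroids[m]))
```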
CHARACTER RECOGNITION METHOD, CHARACTER RECOGNITION DEVICE AND NON-TRANSITORY COMPUTER READABLE MEDIUM
A character recognition method includes the following operations: determining, by a processor, that an image of a character to be identified corresponds to a matching character among several registered characters according to several vector distances between a vector of the image of the character to be identified and several vectors of several registered character images of the registered characters, and storing a matching vector distance between the vector of the image of the character to be identified and a vector of the matching character; and storing, by the processor, data of the matching character according to the image of the character to be identified when the matching vector distance is less than a vector distance threshold.
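The matching operation reduces to a nearest-neighbor search over registered character vectors with a distance threshold. A minimal sketch, assuming Euclidean distance and a dict of registered vectors (both assumptions, since the claim does not fix the metric):

```python
import math

def match_character(query_vec, registered, threshold=1.0):
    """Return (label, distance) of the closest registered character,
    or (None, distance) when even the best match exceeds the threshold."""
    label, dist = min(
        ((name, math.dist(query_vec, vec)) for name, vec in registered.items()),
        key=lambda pair: pair[1],
    )
    return (label, dist) if dist < threshold else (None, dist)
```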
Photograph-based assessment of dental treatments and procedures
The current document is directed to methods and systems for monitoring a dental patient's progress during a course of treatment. A three-dimensional model of the expected positions of the patient's teeth can be projected, in time, from a three-dimensional model of the patient's teeth prepared prior to beginning the treatment. A digital camera is used to take one or more two-dimensional photographs of the patient's teeth, which are input to a monitoring system. The monitoring system determines virtual-camera parameters for each two-dimensional input image with respect to the time-projected three-dimensional model, uses the determined virtual-camera parameters to generate two-dimensional images from the three-dimensional model, and then compares each input photograph to the corresponding generated two-dimensional image in order to determine how closely the three-dimensional arrangement of the patient's teeth corresponds to the time-projected three-dimensional arrangement.
HANDWRITING RECOGNITION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM
A handwriting recognition method and apparatus, and an electronic device and a storage medium are provided. The method includes: acquiring a text image containing handwritten text; inputting the text image into a convolutional neural network, and extracting a CNN feature and a HOG feature of the text image; and recognizing the handwritten text in the text image according to the CNN feature and the HOG feature.
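For readers unfamiliar with the HOG side of the feature extraction, a drastically simplified histogram-of-oriented-gradients computation for a single cell looks like this (real HOG adds cell/block tiling, interpolated voting, and block normalization; this sketch keeps only the gradient-and-bin core):

```python
import math

def orientation_histogram(image, bins=9):
    """Simplified HOG-style descriptor for one cell: central-difference
    gradients, with magnitude-weighted votes into unsigned orientation
    bins spanning 0..180 degrees."""
    h, w = len(image), len(image[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]
            gy = image[y + 1][x] - image[y - 1][x]
            mag = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            hist[int(angle // (180.0 / bins)) % bins] += mag
    return hist
```

A vertical intensity edge, for example, produces purely horizontal gradients and so votes entirely into the first (0-degree) bin.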
GENERATING TRAINING DATA FOR ESTIMATING MATERIAL PROPERTY PARAMETER OF FABRIC AND ESTIMATING MATERIAL PROPERTY PARAMETER OF FABRIC
Estimating a material property parameter of fabric involves receiving information including a three-dimensional (3D) contour shape of fabric placed over a 3D geometric object, estimating a material property parameter of the fabric used for representing drape shapes of 3D clothes made of the fabric by applying the information to a trained artificial neural network, and providing the material property parameter of the fabric.
Detecting typography elements from outlines
Systems, methods, and non-transitory computer-readable media are disclosed for determining a glyph and a font from a vector outline by applying various combinations of hash-based querying, path-descriptor matching, or anchor-point matching. For example, the disclosed systems can select a subset of candidate glyphs for a vector outline based on (i) comparing hash keys of candidate glyphs with a point-order-agnostic hash key corresponding to the vector outline and (ii) comparing a path descriptor for a primary path of the vector outline to path descriptors corresponding to candidate glyphs. By further comparing anchor points between the vector outline and the subset of candidate glyphs, the disclosed systems can select both a glyph and a font matching the vector outline.
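The "point-order-agnostic hash key" idea — a key for a closed outline that does not depend on which anchor point the path happens to start at — can be sketched by canonicalizing over rotations of the point list before hashing. The rounding precision and the choice of SHA-256 are illustrative; note this simple version is start-point-agnostic but not reversal- or transform-invariant:

```python
import hashlib

def path_hash(points, precision=1):
    """Point-order-agnostic hash key for a closed path: round the
    coordinates, then take the lexicographically smallest rotation of
    the point list so the path's starting point does not change the key."""
    rounded = [(round(x, precision), round(y, precision)) for x, y in points]
    rotations = [tuple(rounded[i:] + rounded[:i]) for i in range(len(rounded))]
    canonical = min(rotations)
    return hashlib.sha256(repr(canonical).encode()).hexdigest()
```

Candidate glyphs whose stored keys equal the query outline's key survive the first filtering stage; path descriptors and anchor-point comparison would then narrow the match further.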
IMAGE BASED ASSESSMENT OF DENTAL TREATMENTS MONITORING
Systems and methods for monitoring a dental patient's progress during treatment. A camera coordinate system of a virtual camera is aligned to be coincident with a world coordinate system of a 3D model representing an expected configuration of the patient's teeth at a particular time during treatment. One or more expected 2D images are generated by mapping points from the 3D model to points on an image plane of the virtual camera. One or more 2D images of the patient's teeth taken at the particular time during treatment are compared to the expected 2D images to determine whether a configuration of the patient's teeth is within a threshold level of correspondence to the expected configuration of the patient's teeth. An indication of whether the dental treatment is proceeding as expected can be provided based on whether the configuration of the patient's teeth is within the threshold level of correspondence.
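The mapping-and-compare step shared by both dental abstracts can be sketched as a pinhole projection of model points onto the virtual camera's image plane, followed by an RMSE check against observed landmarks. The camera model (at the origin, looking down +z) and the threshold value are illustrative assumptions:

```python
import math

def project(points_3d, focal=1.0):
    """Pinhole projection of 3D model points (camera at the origin,
    looking down +z) onto the virtual camera's image plane."""
    return [(focal * x / z, focal * y / z) for x, y, z in points_3d]

def within_threshold(observed_2d, expected_2d, threshold=0.05):
    """Compare observed 2D tooth landmarks with the expected rendering
    via root-mean-square error against an illustrative threshold."""
    rmse = math.sqrt(
        sum(math.dist(o, e) ** 2 for o, e in zip(observed_2d, expected_2d))
        / len(observed_2d)
    )
    return rmse <= threshold
```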