Patent classifications
G06V30/1912
Systems and methods for matching facial images to reference images
A facial feature matching system comprises a facial feature matching engine. A first user selection of reference facial images is received, and facial features of the reference faces are characterized using the facial feature matching engine, which comprises a neural network with input, hidden, and output layers. The facial features are weighted. The weighted facial features are used to identify users that have facial features similar to the weighted facial features, wherein the respective reference faces include faces different than the faces of the users. Similarity indicators are generated for the identified users. The generated similarity indicators are used to generate an ordering of the identified users, which is rendered via a user device. A first user selection of a second user among the ordered identified users is received, and the first user and the second user are enabled to communicate over an electronic communication channel.
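The weighting and ranking steps above can be sketched as follows; a minimal illustration assuming faces are already encoded as fixed-length feature vectors (the helper names and the cosine-similarity choice are hypothetical, not specified by the abstract):

```python
def cosine(a, b):
    # cosine similarity between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb + 1e-9)

def rank_users(reference_faces, user_faces, weights):
    """Order user ids by similarity to the weighted reference features.

    reference_faces: list of feature vectors for the selected reference images
    user_faces: dict mapping user_id -> feature vector
    weights: per-feature weights applied to both sides of the comparison
    """
    n = len(reference_faces)
    # average the reference feature vectors, then apply per-feature weights
    target = [w * sum(face[i] for face in reference_faces) / n
              for i, w in enumerate(weights)]
    weighted = {uid: [w * f for w, f in zip(weights, face)]
                for uid, face in user_faces.items()}
    # descending similarity gives the rendered ordering of identified users
    return sorted(weighted, key=lambda uid: cosine(target, weighted[uid]),
                  reverse=True)
```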
Method of training text quality assessment model and method of determining text quality
A method of training a text quality assessment model, a method of determining text quality, an electronic device, and a storage medium are provided. The method of training the text quality assessment model includes: determining, from a plurality of texts and based on indicators for the texts, a first text satisfying a condition of being a negative sample and a second text satisfying a condition of being a positive sample; for each of the first text and the second text, adding a label to the text based on the condition satisfied by the text, wherein the label indicates a category of the text, and the category includes a low-quality category for the negative sample and a non-low-quality category for the positive sample; and forming a training set from the labeled first text and the labeled second text, to train the text quality assessment model.
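The labeling and training-set construction can be sketched as follows, assuming a single numeric quality indicator per text and hypothetical threshold parameters for the negative-sample and positive-sample conditions:

```python
def build_training_set(texts, indicators, low_threshold, high_threshold):
    """Label texts by the condition each satisfies and form a training set.

    Assumed conditions (not specified by the abstract):
    indicator <= low_threshold  -> negative sample, "low-quality" label
    indicator >= high_threshold -> positive sample, "non-low-quality" label
    Texts satisfying neither condition are left out of the training set.
    """
    training_set = []
    for text, score in zip(texts, indicators):
        if score <= low_threshold:
            training_set.append((text, "low-quality"))
        elif score >= high_threshold:
            training_set.append((text, "non-low-quality"))
    return training_set
```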
CHARACTER RECOGNITION DEVICE, CHARACTER RECOGNITION METHOD, AND PROGRAM
A character recognition device of an embodiment includes a first score calculation unit, a character region estimation unit, a second score calculation unit, and a selection unit. The first score calculation unit calculates a first score, indicating the likelihood of a character string, for each of a plurality of candidate character strings that are candidates for character strings included in an input image. The character region estimation unit estimates, among regions of the input image, a region corresponding to each character included in the candidate character string. The second score calculation unit calculates a second score indicating a consistency of characters included in the candidate character string on the basis of the estimated regions. The selection unit selects one or more character strings from among the plurality of candidate character strings on the basis of the calculated first scores and the calculated second scores.
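One plausible reading of the two-score selection is a weighted combination of the likelihood score and the region-consistency score; the fusion rule and the `alpha` parameter below are assumptions for illustration, not the patent's method:

```python
def select_strings(candidates, top_k=1, alpha=0.5):
    """Select character strings from candidates scored two ways.

    candidates: list of (string, first_score, second_score), where
    first_score is the string likelihood and second_score is the
    character-region consistency.
    """
    # assumed fusion: convex combination of the two scores
    ranked = sorted(candidates,
                    key=lambda c: alpha * c[1] + (1 - alpha) * c[2],
                    reverse=True)
    return [string for string, _, _ in ranked[:top_k]]
```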
Wine label recognition method, wine information management method and apparatus, device, and storage medium
A wine label recognition method, a wine information management method and apparatus, a computer device, and a computer-readable storage medium are provided. The method includes: obtaining a wine image, and performing optical character recognition (OCR) on the wine image in a preset OCR manner, to obtain text included in the wine image (S21); performing deep learning recognition on the wine image in a preset deep learning recognition manner, to obtain an image feature included in the wine image (S22); and selecting, from a preset wine label database, a target wine label matching the text and the image feature, and using the target wine label as the wine label corresponding to the wine image (S23). The advantages of deep learning and OCR are fully utilized, thereby improving the accuracy and efficiency of wine label recognition and improving the automation efficiency of wine information management.
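The sifting step can be sketched as combining a text-keyword match with an image-feature similarity; the database schema and the additive scoring rule below are hypothetical:

```python
def match_wine_label(ocr_text, image_feature, label_db):
    """Pick the database label best matching both OCR text and image feature.

    label_db entries are assumed dicts: {"name", "keywords", "feature"}.
    Score = keyword-overlap fraction + cosine similarity of image features.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb + 1e-9)

    words = set(ocr_text.lower().split())
    best_name, best_score = None, float("-inf")
    for label in label_db:
        keywords = set(k.lower() for k in label["keywords"])
        text_score = len(words & keywords) / max(len(keywords), 1)
        image_score = cosine(image_feature, label["feature"])
        score = text_score + image_score
        if score > best_score:
            best_name, best_score = label["name"], score
    return best_name
```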
Threshold calculation system, threshold calculation method, and computer program
A threshold calculation system includes: a first acquisition unit that obtains matching information used for matching of a biological body; a second acquisition unit that obtains attribute information indicating an attribute of the biological body; a storage unit that stores the matching information and the attribute information for each biological body; a sampling unit that extracts, as sample data, a plurality of pieces of matching information from the storage unit on the basis of a predetermined condition regarding the attribute information; a population estimation unit that estimates a population from the sample data; and a threshold calculation unit that calculates a threshold related to the matching information on the basis of a distribution of the estimated population. Such a threshold calculation system makes it possible to properly calculate the threshold used for biometric authentication.
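A minimal sketch of the population-estimation and threshold steps, assuming the population is modeled as a normal distribution fitted to the sampled matching scores and the threshold sits `k` standard deviations from the mean (`k` is an assumed operating parameter, not from the abstract):

```python
import statistics

def calculate_threshold(sample_scores, k=2.0):
    """Estimate a score population from samples and derive a threshold.

    sample_scores: matching scores sampled under an attribute condition
    (e.g. same age group). The population is assumed normal; the threshold
    is placed k standard deviations above the estimated mean.
    """
    mu = statistics.fmean(sample_scores)       # estimated population mean
    sigma = statistics.stdev(sample_scores)    # estimated population spread
    return mu + k * sigma
```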
Digital quality control using computer visioning with deep learning
Implementations include: receiving sample data generated as digital data representative of a sample of the product; providing a set of features by processing the sample data through multiple layers of a residual network, a first layer of the residual network identifying one or more features of the sample data, and a second layer of the residual network receiving the one or more features of the first layer and identifying one or more additional features; processing the set of features using a CNN to identify a set of regions and at least one object in a region of the set of regions, and to determine a type of the at least one object; and selectively issuing an alert based at least in part on the type of the at least one object, the alert indicating contamination within the sample of the product.
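Two of the pieces above can be illustrated in isolation: the residual connection that carries a layer's input forward alongside its transformed features, and the selective alerting on detected object types. Both helpers are simplified stand-ins, not the patent's implementation:

```python
def residual_layer(features, transform):
    # residual connection: output = input + transform(input),
    # so later layers receive the earlier features plus additional ones
    return [x + t for x, t in zip(features, transform(features))]

def quality_alert(object_types, contaminant_types):
    """Issue an alert only when a detected object type is a known contaminant."""
    hits = [t for t in object_types if t in contaminant_types]
    if hits:
        return "ALERT: contamination detected: " + ", ".join(hits)
    return None
```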
MATCHING SYSTEM FOR IMAGES AND TEXT DESCRIPTIONS IN SPECIFICATIONS
Provided is a matching system for images and text descriptions in a specification. The matching system includes: an image-and-text recognition device, which receives a specification and recognizes image blocks and text blocks thereon, each image block having a corresponding covering range; and a preference value calculation device, which assigns a preference value to each of the text blocks according to the positional relationship between the text block and the image block and the content of the text block, for matching the image blocks and the text blocks.
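The preference-value assignment can be sketched with two hypothetical cues: vertical proximity between a text block and an image block, and a figure-reference keyword in the text content. The block representation and scoring weights are assumptions:

```python
def preference_value(text_block, image_block, keyword="fig"):
    """Score a text block for one image block from position and content cues."""
    # positional cue: closer blocks get higher scores
    distance = abs(text_block["y"] - image_block["y"])
    score = 1.0 / (1.0 + distance)
    # content cue: text mentioning a figure is a likelier description
    if keyword in text_block["text"].lower():
        score += 0.5
    return score

def match_text_to_image(text_blocks, image_block):
    # match the image block to the text block with the highest preference value
    return max(text_blocks, key=lambda tb: preference_value(tb, image_block))
```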
Informative user interface for document recognizer training
A method includes receiving, from a user device associated with a user, a plurality of annotated documents. Each respective annotated document includes one or more fields, with each respective field labeled by a respective annotation. The method includes, for a threshold number of iterations: randomly selecting a respective subset of annotated documents from the plurality of annotated documents; training a respective model on the respective subset of annotated documents; and generating, using the annotated documents not selected for the respective subset, a respective evaluation of the respective model. The method also includes providing, to the user device, each respective evaluation.
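The iteration loop maps onto repeated random-subsampling validation; `train_fn` and `eval_fn` below are placeholder callables standing in for the document recognizer's training and evaluation:

```python
import random

def evaluate_iterations(annotated_docs, train_fn, eval_fn,
                        iterations=5, subset_frac=0.8, seed=0):
    """For each iteration: sample a training subset, train a model on it,
    and evaluate the model on the documents not selected for the subset.
    Returns one evaluation per iteration, to be provided to the user device.
    """
    rng = random.Random(seed)
    evaluations = []
    for _ in range(iterations):
        subset = rng.sample(annotated_docs,
                            int(len(annotated_docs) * subset_frac))
        held_out = [d for d in annotated_docs if d not in subset]
        model = train_fn(subset)
        evaluations.append(eval_fn(model, held_out))
    return evaluations
```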
Identifying provenance information of a data item generated by a generative machine learning model
Metadata may be identified for text generated by a generative machine learning model. A text is obtained, and a weighting scheme is determined for performing similarity analysis. Different similarity analysis techniques are performed that compare the text with representations of texts in the training data set of the generative machine learning model. Final similarity scores are generated by combining the results of the different similarity analysis techniques according to the weighting scheme, and are used to select metadata relevant to the text to provide.
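The weighted combination of techniques can be sketched as follows; the scoring functions passed in are hypothetical stand-ins for the similarity analysis techniques, and the metadata selection simply takes the highest-scoring training text:

```python
def final_similarity_scores(text, corpus, techniques, weights):
    """Combine per-technique scores into one final score per training text.

    techniques: dict name -> scoring function (text, doc) -> score
    weights: dict name -> weight from the weighting scheme
    """
    return [sum(weights[name] * fn(text, doc)
                for name, fn in techniques.items())
            for doc in corpus]

def select_metadata(text, corpus, metadata, techniques, weights):
    # provide the metadata of the most similar training-set text
    scores = final_similarity_scores(text, corpus, techniques, weights)
    best = max(range(len(corpus)), key=scores.__getitem__)
    return metadata[best]
```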