Patent classifications
G06F18/24765
Automatic rule prediction and generation for document classification and validation
A method is provided. The method may include, in response to electronically receiving a document, automatically classifying the document and different parts of the document, by electronically identifying a document type associated with the document and electronically tagging data associated with the different parts of the document based on classification rules. The method may further include automatically extracting the tagged data associated with the automatically classified document based on data extraction rules. The method may further include detecting first feedback associated with the classification rules and second feedback associated with the data extraction rules. The method may further include automatically generating and updating validation rules based on the identified document type, the detected first feedback, and the detected second feedback to validate the automatically classified document and the automatically tagged and extracted data.
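The pipeline above (classify, extract, then fold feedback into validation rules) can be sketched as follows. This is a minimal illustration only: the keyword-matching classifier, the rule shapes, and all function names are assumptions for demonstration, not the patented method.

```python
def classify(document, classification_rules):
    """Tag document parts whose text matches a rule's keyword."""
    tags = {}
    for part, text in document.items():
        for tag, keyword in classification_rules.items():
            if keyword in text:
                tags[part] = tag
    return tags

def extract(document, tags, extraction_rules):
    """Pull tagged data using a per-tag extraction rule."""
    return {part: extraction_rules[tag](document[part])
            for part, tag in tags.items() if tag in extraction_rules}

def update_validation_rules(validation_rules, doc_type, feedback):
    """Fold user feedback into per-(document type, tag) validation rules."""
    rules = dict(validation_rules)
    for tag, accepted in feedback.items():
        rules[(doc_type, tag)] = accepted
    return rules

doc = {"header": "INVOICE #42", "body": "total due: 100"}
classification_rules = {"invoice_no": "INVOICE", "amount": "total due"}
tags = classify(doc, classification_rules)
data = extract(doc, tags, {"amount": lambda t: t.split(":")[1].strip()})
validation = update_validation_rules({}, "invoice", {"amount": True})
```

Here the feedback loop is reduced to accept/reject flags per tag; the claimed method also conditions validation-rule generation on the identified document type, which the `(doc_type, tag)` key gestures at.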
Semi-supervised target recognition in video
The technology described herein is directed to a media indexer framework including a character recognition engine that automatically detects and groups instances (or occurrences) of characters in a multi-frame animated media file, such that each group contains images associated with a single character. The character groups are then labeled and used to train an image classification model. Once trained, the image classification model can be applied to subsequent multi-frame animated media files to automatically classify the animated characters included therein.
CLASSIFICATION-BASED MACHINE LEARNING FRAMEWORKS TRAINED USING PARTITIONED TRAINING SETS
Various embodiments of the present invention improve the speed of training classification-based machine learning models by introducing techniques that enable efficient parallelization of such training routines while enhancing the accuracy of each parallel implementation of a training routine. For example, in some embodiments, a classification-based machine learning model is trained via executing N parallel processes each executing a portion of a training routine, where each parallel process is performed using a training set having a uniform distribution of labels associated with the classification-based machine learning model. In this way, each parallel process is more likely to update parameters of the classification-based machine learning model in accordance with a holistic representation of the training data, which in turn improves the overall accuracy of the resulting trained classification-based machine learning models while enabling parallel training of the classification-based machine learning model.
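A simple way to give each of the N parallel processes a training set with a uniform distribution of labels is stratified round-robin partitioning. The sketch below is an illustrative assumption (in-memory data, a plain round-robin scheme), not the embodiments' actual partitioner:

```python
from collections import defaultdict

def stratified_partitions(samples, labels, n):
    """Split (sample, label) pairs into n partitions, each holding a
    near-equal share of every label, via per-label round-robin."""
    by_label = defaultdict(list)
    for s, y in zip(samples, labels):
        by_label[y].append(s)
    parts = [[] for _ in range(n)]
    for y, group in by_label.items():
        for i, s in enumerate(group):
            parts[i % n].append((s, y))
    return parts

samples = list(range(8))
labels = ["a", "a", "a", "a", "b", "b", "b", "b"]
parts = stratified_partitions(samples, labels, 2)
# Each of the 2 partitions now holds two "a" and two "b" examples,
# so each parallel worker sees the full label distribution.
```

Because every partition mirrors the global label distribution, each worker's parameter updates reflect a holistic view of the data, which is the property the abstract credits for the accuracy improvement.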
Image processing for stream of input images with enforced identity penalty
A method of improving image quality of a stream of input images is described. The stream of input images, including a current input image, is received. One or more target objects, including a first target object, are identified spatio-temporally within the stream of input images. The one or more target objects are tracked spatio-temporally within the stream of input images. The current input image is segmented into i) a foreground including the first target object, and ii) a background. The foreground is processed to have improved image quality in the current input image. Processing of the foreground further comprises processing the first target object using a same processing technique as for a prior input image of the stream of input images based on the tracking of the first target object. The background is processed differently from the foreground. An output image is generated by merging the foreground with the background.
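The split foreground/background processing and the tracking-keyed reuse of a technique across frames can be sketched as below. Everything here is a stand-in: frames are flat pixel lists, the "techniques" are toy per-pixel functions, and the object-id registry is a hypothetical stand-in for the spatio-temporal tracker.

```python
def enhance_frame(frame, fg_mask, fg_fn, bg_fn):
    """Apply fg_fn to foreground pixels and bg_fn to background
    pixels, then merge them into one output image."""
    return [fg_fn(p) if m else bg_fn(p)
            for p, m in zip(frame, fg_mask)]

# The tracker ties each object id to the technique chosen on its
# first appearance, so later frames reuse the same technique and
# the object looks temporally consistent.
techniques = {}
def technique_for(object_id, default):
    return techniques.setdefault(object_id, default)

frame = [10, 200, 30, 250]           # toy 4-pixel "image"
mask = [False, True, False, True]    # True = tracked target object
fg = technique_for("obj1", lambda p: min(p * 2, 255))  # brighten
out = enhance_frame(frame, mask, fg, lambda p: p // 2)  # crude "blur"
```

A later frame containing `"obj1"` would call `technique_for("obj1", ...)` again and get back the original brightening function regardless of the default passed in, mirroring the claim that the tracked object is processed with the same technique as in the prior input image.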
Method and apparatus for training face fusion model and electronic device
Embodiments of the present disclosure provide a method for training a face fusion model and an electronic device. The method includes: performing a first face changing process on a user image and a template image to generate a reference template image; adjusting poses of facial features of the template image based on the reference template image to generate a first input image; performing a second face changing process on the template image to generate a second input image; inputting the first input image and the second input image into a generator of an initial face fusion model to generate a fused face area image; and inputting the fused face area image and the template image into a discriminator of the initial face fusion model to obtain a discrimination result, and performing backpropagation correction on the initial face fusion model based on that result to generate the face fusion model.
Data analyzing device and data analyzing method
To generate effective features at high speed when obtaining features from a large volume of data, the candidate features to be generated are narrowed down effectively. A fixed rule and an additional rule are stored in advance. The fixed rule specifies a calculation operation for generating a new feature. The additional rule specifies, on the basis of meta-information and independently of whether the fixed rule is applicable, whether to perform a calculation operation for generating the new feature. An objective variable is predicted from a plurality of features on the basis of the fixed rule and the additional rule.
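The interplay of fixed rules (always applied) and additional rules (gated by meta-information) can be sketched as follows; the rule representations, predicate form, and feature names are illustrative assumptions:

```python
def generate_features(row, fixed_rules, additional_rules, meta):
    """Apply every fixed rule; apply an additional rule only when
    its meta-information predicate holds for this dataset."""
    feats = {name: fn(row) for name, fn in fixed_rules.items()}
    for name, (meta_pred, fn) in additional_rules.items():
        if meta_pred(meta):
            feats[name] = fn(row)
    return feats

row = {"price": 10.0, "qty": 3}
fixed = {"total": lambda r: r["price"] * r["qty"]}
# Additional rule: only compute sqrt(price) when the column's
# meta-information marks it as numeric.
extra = {"sqrt_price": (lambda m: m["numeric"],
                        lambda r: r["price"] ** 0.5)}
feats = generate_features(row, fixed, extra, {"numeric": True})
```

The gating predicate is what narrows down which features get generated: on a dataset whose meta-information fails the predicate, the additional calculation is simply skipped, saving the cost of materializing an unhelpful feature.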
Feature extraction method, comparison system, and storage medium
The feature extraction device according to one aspect of the present disclosure comprises: a reliability determination unit that determines a degree of reliability for a second region, where the first region is a partial region extracted from an image as containing a recognition subject, the second region is a region within the first region that has been extracted as a foreground region of the image, and the degree of reliability indicates the likelihood of the second region being the recognition subject; a feature determination unit that, on the basis of the degree of reliability, determines a feature of the recognition subject using a first feature extracted from the first region and a second feature extracted from the second region; and an output unit that outputs information indicating the determined feature of the recognition subject.
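One plausible reading of the feature determination unit is a reliability-weighted blend of the two regions' features, falling back to the first (larger) region when the foreground extraction looks unreliable. This is an interpretive sketch with assumed names and an assumed threshold, not the disclosed device:

```python
def determine_feature(f1, f2, reliability, threshold=0.5):
    """Blend the second-region (foreground) feature f2 with the
    first-region feature f1 in proportion to the foreground's
    reliability; use f1 alone when reliability is too low."""
    if reliability < threshold:
        return f1
    return [reliability * b + (1 - reliability) * a
            for a, b in zip(f1, f2)]

f_region = [1.0, 0.0]       # feature from the full partial region
f_foreground = [0.0, 1.0]   # feature from the extracted foreground
feat = determine_feature(f_region, f_foreground, 0.8)
```

With reliability 0.8 the result leans toward the foreground feature; with reliability 0.3 the function returns the first-region feature unchanged.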
Machine learning for machine-assisted data classification
Methods, apparatus, systems, computing devices, computing entities, and/or the like employ machine learning concepts to accurately predict categories for unseen data assets, present the predicted categories to a user via a user interface for review, and assign the categories to the data assets responsive to user interaction confirming them.
Automation rating for machine learning classification
In some embodiments, a first output is received from a first prediction network at a second prediction network. The first prediction network generates the first output from a first input. A second input that describes the first input is also received at the second prediction network. The second prediction network analyzes the first output and the second input and generates a second output that classifies the first output into one of a set of classifications. The first output is then output together with that classification. The second output indicates that the first output should be reviewed when it is classified in a first classification in the set of classifications, and not reviewed when it is classified in a second classification in the set of classifications.
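The two-network routing scheme can be sketched as below. The confidence-threshold rule standing in for the second prediction network, and all names, are illustrative assumptions:

```python
def route(first_output, description, second_network):
    """Let the second network classify the first network's output
    as needing review or not; return the pair together."""
    label = second_network(first_output, description)
    return first_output, label, label == "review"

# Toy second network: flag low-confidence predictions for review.
def second_network(output, description):
    return "review" if output["confidence"] < 0.9 else "accept"

pred = {"class": "cat", "confidence": 0.6}
out, label, needs_review = route(pred, "photo of a pet", second_network)
```

In a real deployment the second network would itself be a trained model over the first output and its describing input, but the routing contract is the same: the first output always leaves the system paired with its review/no-review classification.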
Information processing apparatus for analyzing image data, control method for the same, and storage medium
An information processing apparatus capable of communicating with an external device includes: an analysis unit configured to analyze image data and acquire a second analysis result using a second inference model that is less accurate than a first inference model of the external device, when communication with the external device is not possible; a transmission unit configured to transmit the image data to the external device when communication with the external device is possible; and an acquisition unit configured to acquire, from the external device, a first analysis result obtained by analyzing the transmitted image data using the first inference model. The first inference model and the second inference model are generated by performing machine learning.
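The cloud-preferred, on-device-fallback pattern described in this abstract can be sketched as below; the function names and the callable stand-ins for the two inference models are assumptions for illustration:

```python
def analyze(image, cloud_model, local_model, cloud_available):
    """Prefer the more accurate cloud-side model when the external
    device is reachable; otherwise fall back to the smaller,
    less accurate on-device model."""
    if cloud_available:
        try:
            return cloud_model(image), "cloud"
        except ConnectionError:
            pass  # connection dropped mid-request: fall back locally
    return local_model(image), "local"

result, source = analyze("img.jpg",
                         lambda i: "cat (0.99)",   # first inference model
                         lambda i: "cat (0.80)",   # second inference model
                         cloud_available=False)
```

The `try/except` covers the case where the link fails after the availability check, so the apparatus still returns an analysis result rather than an error.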