Patent classifications
G06V10/7796
AUTOMATED CLASSIFICATION BASED ON PHOTO-REALISTIC IMAGE/MODEL MAPPINGS
Techniques are provided for increasing the accuracy of automated classifications produced by a machine learning engine. Specifically, the classification produced by a machine learning engine for one photo-realistic image is adjusted based on the classifications produced by the machine learning engine for other photo-realistic images that correspond to the same portion of a 3D model that has been generated based on the photo-realistic images. Techniques are also provided for using the classifications of the photo-realistic images that were used to create a 3D model to automatically classify portions of the 3D model. The classifications assigned to the various portions of the 3D model in this manner may also be used as a factor for automatically segmenting the 3D model.
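The adjustment step described above can be sketched as follows, assuming (hypothetically) that each image already has per-class scores and a known mapping to a 3D-model region; the group-average rule here stands in for whatever adjustment the patent's engine actually applies:

```python
from collections import defaultdict

def adjust_classifications(scores, region_of_image):
    """Adjust each image's class scores by averaging them with the scores
    of every other image that maps to the same portion of the 3D model."""
    by_region = defaultdict(list)
    for img, s in scores.items():
        by_region[region_of_image[img]].append(s)
    adjusted = {}
    for img, s in scores.items():
        group = by_region[region_of_image[img]]
        adjusted[img] = {c: sum(g[c] for g in group) / len(group) for c in s}
    return adjusted

# Two images that both map to the same model region:
scores = {"img_a": {"roof": 0.9, "wall": 0.1},
          "img_b": {"roof": 0.5, "wall": 0.5}}
regions = {"img_a": "region_1", "img_b": "region_1"}
adjusted = adjust_classifications(scores, regions)
```

The same per-region averages could then serve as the classification of that portion of the 3D model itself, and as a segmentation signal.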
Controlling agents over long time scales using temporal value transport
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network system used to control an agent interacting with an environment to perform a specified task. One of the methods includes causing the agent to perform a task episode, over a sequence of time steps, in which the agent attempts to perform the specified task; for each of one or more particular time steps in the sequence: generating a modified reward for the particular time step from (i) the actual reward at the time step and (ii) value predictions at one or more time steps that are more than a threshold number of time steps after the particular time step in the sequence; and training, through reinforcement learning, the neural network system using at least the modified rewards for the particular time steps.
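A minimal sketch of the reward modification, under the simplest reading in which every value prediction more than `threshold` steps ahead contributes to the modified reward (the actual system selects which later steps contribute):

```python
def modified_rewards(rewards, values, threshold):
    """r'_t = r_t + sum of value predictions at time steps more than
    `threshold` steps after t (simplified temporal value transport)."""
    T = len(rewards)
    return [rewards[t] + sum(values[t2] for t2 in range(t + threshold + 1, T))
            for t in range(T)]

# Episode of 3 steps: actual reward only at the end, one value prediction per step.
r_mod = modified_rewards(rewards=[0.0, 0.0, 1.0],
                         values=[0.1, 0.2, 0.3],
                         threshold=1)
```

The modified rewards then replace the raw rewards in a standard reinforcement-learning update, crediting early actions for consequences that arrive long afterwards.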
Obtaining patterns for surfaces of objects
A method, computer system and computer-readable medium for determining a surface pattern for a target object using: an evolutionary algorithm such as a genetic algorithm; a parameterized texture-generating function; a 3D renderer for rendering images of a 3D model of the target object with a texture obtained from the parameterized texture-generating function; and an object recognition model that processes the images and predicts whether or not each image contains an object of the target object's type or category. Sets of parameters are generated using the evolutionary algorithm, and the accuracy of the object recognition model's predictions on images of the 3D model textured according to each set of parameters is used to determine a fitness score. Parameter sets are scored by this fitness for the purpose of producing further generations of parameter sets, for example through genetic-algorithm operations such as mutation and crossover. The surface pattern is obtained based on the images of the 3D model rendered with a surface texture generated according to a high-scoring set of parameters.
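The evolutionary loop can be illustrated as below; the toy fitness function stands in for the render-then-recognize scoring described above, and all names and hyperparameters are illustrative assumptions:

```python
import random

def evolve(fitness, n_params=4, pop_size=8, generations=10, seed=0):
    """Generic genetic-algorithm loop: score each parameter set with
    `fitness`, keep the top half, and breed the rest via one-point
    crossover plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_params)
            child = a[:cut] + b[cut:]                                    # crossover
            i = rng.randrange(n_params)
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Stand-in fitness: in the patent this would render the textured 3D model
# and score the object recognition model's prediction accuracy.
best = evolve(lambda p: -sum((x - 0.5) ** 2 for x in p))
```

Keeping the elite unchanged each generation (elitism) guarantees the best-found parameter set never regresses between generations.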
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
The aim is to facilitate the collection of the large amount of training data needed to obtain a good-quality learning result.
A feature value of a first dataset is compared with feature values of a predetermined number of second datasets. On the basis of the result of the comparison, a determination is made as to whether or not each of the second datasets is usable together with the first dataset. For example, the determination is made with reference to lacking-data information associated with the first dataset, and information regarding a second dataset determined to be usable together with the first dataset is presented.
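One illustrative reading of the comparison step, assuming feature values are numeric summaries per attribute and the lacking-data information names attributes the first dataset is missing (the names and the tolerance rule are assumptions, not from the publication):

```python
def find_usable(first, candidates, lacking, tol=0.2):
    """A candidate second dataset is deemed usable with `first` when its
    shared feature values are within `tol` of first's AND it supplies
    the attributes listed as lacking."""
    usable = []
    for name, feats in candidates.items():
        similar = all(k in feats and abs(feats[k] - first[k]) <= tol
                      for k in first)
        supplies = all(k in feats for k in lacking)
        if similar and supplies:
            usable.append(name)
    return usable

first = {"brightness_mean": 0.50, "label_balance": 0.45}
candidates = {
    "set_A": {"brightness_mean": 0.55, "label_balance": 0.40, "night_share": 0.3},
    "set_B": {"brightness_mean": 0.95, "label_balance": 0.50, "night_share": 0.2},
}
usable = find_usable(first, candidates, lacking=["night_share"])
```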
Method and device for facial image recognition
A method for facial image recognition is provided. A plurality of original facial images are received. A plurality of standard facial images corresponding to the original facial images are generated through a standard face generation model. A recognition model is trained using the original facial images and the standard facial images. The recognition model is tested using an original facial image test set and a standard facial image test set until a first accuracy rate on the original facial image test set is higher than a first threshold value and a second accuracy rate on the standard facial image test set is higher than a second threshold value. The original facial image test set is composed of original facial images obtained by sampling, and the standard facial image test set is composed of standard facial images obtained by sampling.
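The train-then-test loop reduces to the following sketch, where `train_step` and `evaluate` are hypothetical stand-ins for one pass of recognition-model training and for scoring the two test sets:

```python
def train_until_thresholds(train_step, evaluate, t1, t2, max_epochs=100):
    """Keep training until accuracy on the original-face test set exceeds
    t1 AND accuracy on the standard-face test set exceeds t2."""
    for epoch in range(1, max_epochs + 1):
        train_step()
        acc_orig, acc_std = evaluate()
        if acc_orig > t1 and acc_std > t2:
            return epoch
    return None  # thresholds never reached within the budget

# Toy "model" whose accuracy improves by 0.2 per epoch:
state = {"acc": 0.0}
epochs = train_until_thresholds(
    train_step=lambda: state.__setitem__("acc", state["acc"] + 0.2),
    evaluate=lambda: (state["acc"], state["acc"] - 0.05),
    t1=0.5, t2=0.5)
```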
Systems and methods for human mesh recovery
Human mesh model recovery may utilize prior knowledge of the hierarchical structural correlation between different parts of a human body. Such structural correlation may be between a root kinematic chain of the human body and a head or limb kinematic chain of the human body. Shape and/or pose parameters relating to the human mesh model may be determined by first determining the parameters associated with the root kinematic chain and then using those parameters to predict the parameters associated with the head or limb kinematic chain. Such a task can be accomplished using a system comprising one or more processors and one or more storage devices storing instructions that, when executed by the one or more processors, cause the one or more processors to implement one or more neural networks trained to perform functions related to the task.
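The hierarchical dependency can be sketched with two toy linear heads; the dimensions and the NumPy stand-ins for the trained neural networks are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_head(in_dim, out_dim):
    """Stand-in for a trained regression head."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: x @ W

feat_dim, root_dim, limb_dim = 16, 6, 24
root_head = linear_head(feat_dim, root_dim)
limb_head = linear_head(feat_dim + root_dim, limb_dim)

def predict_pose(feats):
    root = root_head(feats)                           # root chain first...
    limb = limb_head(np.concatenate([feats, root]))   # ...then limbs, conditioned on it
    return root, limb

root, limb = predict_pose(rng.standard_normal(feat_dim))
```

Conditioning the limb head on the predicted root parameters is what encodes the structural correlation between the kinematic chains.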
System and method for interactively and iteratively developing algorithms for detection of biological structures in biological samples
A method for categorizing biological structures of interest (BSOIs) in digitized images of biological tissues comprises a stage of identifying BSOIs in the digitized images. The method further comprises: presenting to a user an image, from the plurality of images, that comprises at least one BSOI with a high level of entropy; receiving from the user input indicative of a category to be associated with the BSOI that had the high level of entropy; and updating the cell-category classifier according to the category of the BSOI provided by the user.
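The entropy-driven selection step might look like the following, where the entropy of the classifier's per-class probabilities identifies the structure worth showing to the user (all names are illustrative):

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def most_uncertain(predictions):
    """Pick the BSOI whose class distribution has the highest entropy,
    i.e. the one the current classifier is least sure about."""
    return max(predictions, key=lambda item: entropy(item[1]))

# (image id, class-probability distribution) pairs from the classifier:
preds = [("img_1", [0.9, 0.1]),
         ("img_2", [0.5, 0.5]),   # maximally uncertain
         ("img_3", [0.7, 0.3])]
picked = most_uncertain(preds)
```

The category the user assigns to the picked structure is then fed back into the classifier's training set, closing the interactive, iterative loop.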
Method for assessing oral health using a mobile device
A method for remotely assessing oral health of a user of a mobile device by obtaining, using the mobile device (40), at least one digital image (1) of said user's (30) oral cavity (31) and additional non-image data (2) comprising anamnestic information about the user (30). The digital image (1) is processed both using a statistical object detection algorithm (20) to extract at least one local visual feature (3) corresponding to a medical finding related to a sub-region of said user's oral cavity (31); and also using a statistical image recognition algorithm (21) to extract at least one global classification label (4) corresponding to a medical finding related to said user's oral cavity (31) as a whole. An assessment (10) of the oral health of said user (30) is determined based on the local visual feature(s) (3), the global classification label(s) (4) and the non-image data (2).
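A toy fusion of the three signal types described above; the findings, weights and decision rule are purely illustrative and not the patent's actual assessment logic:

```python
def assess_oral_health(local_findings, global_labels, anamnestic):
    """Combine sub-region detections, whole-image labels and anamnestic
    answers into a single risk score (illustrative weights)."""
    score = 2.0 * sum(f["confidence"] for f in local_findings)
    score += 1.0 * len(global_labels)
    score += 1.5 if anamnestic.get("smoker") else 0.0
    score += 1.0 if anamnestic.get("pain") else 0.0
    return "refer to dentist" if score >= 2.0 else "no acute findings"

high = assess_oral_health([{"region": "molar", "confidence": 0.9}],
                          ["possible gingivitis"],
                          {"smoker": True})
low = assess_oral_health([], [], {"smoker": False})
```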
Measuring confidence in deep neural networks
A distribution of a plurality of predictions generated by a deep neural network using sensor data is calculated, and the deep neural network includes a plurality of neurons. At least one of a measurement or a classification corresponding to an object is determined based on the distribution. The deep neural network generates each prediction of the plurality of predictions with a different number of neurons.
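The idea can be sketched with a toy network whose neurons are randomly dropped on each forward pass, so repeated predictions form a distribution whose spread acts as a confidence measure (the toy weights and the dropout mechanism are assumptions):

```python
import random
import statistics

WEIGHTS = [0.2, -0.1, 0.4, 0.3]   # a toy one-layer "network" of 4 neurons

def predict(x, rng, drop=0.5):
    """Forward pass with each neuron independently dropped with probability
    `drop`, so every call uses a different number of neurons."""
    active = [w for w in WEIGHTS if rng.random() >= drop]
    return x * sum(active)

def prediction_distribution(x, n_samples=200, seed=0):
    """Summarize the spread of repeated stochastic predictions; a small
    standard deviation indicates a confident measurement."""
    rng = random.Random(seed)
    preds = [predict(x, rng) for _ in range(n_samples)]
    return statistics.mean(preds), statistics.stdev(preds)

mean, spread = prediction_distribution(2.0)
```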
Dataset Quality for Synthetic Data Generation in Computer-Based Reasoning Systems
Techniques for synthetic data generation in computer-based reasoning systems are discussed and include receiving a request for generation of synthetic data based on a set of training data cases. One or more focal training data cases are determined. For undetermined features (either all of them or those that are not subject to conditions), a value for the feature is determined based on the focal cases. In some embodiments, the generated synthetic data may be checked for similarity against the training data, and if similarity conditions are met, it may be modified (e.g., resampled), removed, and/or replaced.
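A minimal sketch of the fill-then-check flow, assuming exact duplication as the similarity condition and random resampling as the remedy (the real system's distance measures and conditions are richer):

```python
import random

def synthesize_case(training, conditioned, k=3, retries=10, seed=0):
    """Pick focal cases, fill each undetermined feature from a focal case's
    value, then resample if the result duplicates a training case."""
    rng = random.Random(seed)
    features = training[0].keys()
    for _ in range(retries):
        focal = rng.sample(training, k)
        case = dict(conditioned)                # conditioned features kept as-is
        for f in features:
            if f not in case:
                case[f] = rng.choice(focal)[f]  # value drawn from focal cases
        if case not in training:                # similarity check (exact match here)
            return case
    return None

training = [{"speed": 30, "brake": 1}, {"speed": 30, "brake": 0},
            {"speed": 60, "brake": 1}, {"speed": 60, "brake": 0}]
synthetic = synthesize_case(training, conditioned={"speed": 45})
```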