Patent classifications
G06V10/7784
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
An information processing device is provided that includes an operation control unit which controls the operations of an autonomous mobile object that acts according to a recognition operation. Upon detecting the start of teaching related to pattern recognition learning, the operation control unit instructs the autonomous mobile object to obtain information regarding the learning target, which is to be learned in association with a taught label. Moreover, an information processing method is provided that is implemented in a processor and that includes controlling the operations of an autonomous mobile object which acts according to a recognition operation. Upon detection of the start of teaching related to pattern recognition learning, the controlling of the operations includes instructing the autonomous mobile object to obtain information regarding the learning target to be learned in association with the taught label.
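The control flow claimed above can be summarized as an event handler: on detecting a teaching-start event, the device drives the mobile object to collect observations of the target and stores them under the taught label. A minimal sketch, in which the event fields, `robot.observe`, and the dataset layout are all illustrative assumptions, not the patent's terminology:

```python
def on_teaching_event(event, robot, dataset):
    """When the start of teaching is detected, instruct the autonomous
    mobile object to gather observations of the learning target and
    record them under the taught label (all names are hypothetical)."""
    if event["type"] == "teaching_started":
        observations = robot.observe(event["target"])   # e.g. capture images
        dataset.setdefault(event["label"], []).extend(observations)
    return dataset
```

The label-to-observations mapping built here is exactly the correspondence the abstract describes: each taught label accumulates the data from which the recognizer is later learned.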
METHOD OF IDENTIFYING FILTERS IN A NEURAL NETWORK, SYSTEM AND STORAGE MEDIUM OF THE SAME
A computer-implemented method of identifying filters for use in determining the explainability of a trained neural network. The method comprises obtaining a dataset comprising an input image and an annotation of the input image, the annotation indicating at least one part of the input image that is relevant for inferring the classification of the input image; determining an explanation filter set by iteratively: selecting a filter of the plurality of filters; adding the filter to the explanation filter set; computing an explanation heatmap for the input image by resizing and combining an output of each filter in the explanation filter set, the explanation heatmap having the spatial resolution of the input image; and computing a similarity metric by comparing the explanation heatmap to the annotation of the input image; until the similarity metric is greater than or equal to a similarity threshold; and outputting the explanation filter set.
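The iterative loop in this claim can be sketched directly: add filters one at a time, resize and sum their activation maps into a heatmap at input resolution, and stop once the heatmap is similar enough to the annotation. The resize method, the summation as the combining step, and the IoU-style similarity below are illustrative assumptions; the patent does not fix them.

```python
import numpy as np

def select_explanation_filters(filter_maps, annotation, threshold=0.8):
    """Greedy sketch of the claimed loop. filter_maps is a list of 2D
    activation maps (any resolution); annotation is a binary 2D mask at
    input-image resolution."""
    h, w = annotation.shape
    explanation_set = []
    heatmap = np.zeros((h, w))

    def resize(m):
        # Nearest-neighbour resize to the input-image resolution.
        ys = np.arange(h) * m.shape[0] // h
        xs = np.arange(w) * m.shape[1] // w
        return m[np.ix_(ys, xs)]

    def similarity(hm, ann):
        # IoU between the thresholded heatmap and the annotation mask.
        mask = hm > 0.5 * (hm.max() + 1e-9)
        inter = np.logical_and(mask, ann > 0).sum()
        union = np.logical_or(mask, ann > 0).sum()
        return inter / union if union else 0.0

    for idx, fmap in enumerate(filter_maps):
        explanation_set.append(idx)          # add filter to the set
        heatmap += resize(fmap)              # combine by summation
        if similarity(heatmap, annotation) >= threshold:
            break                            # metric reached the threshold
    return explanation_set
```

A real implementation would also choose *which* filter to add at each step (e.g. the one that most improves the metric); this sketch takes them in order for brevity.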
IDENTIFICATION SYSTEM, MODEL RE-LEARNING METHOD AND PROGRAM
Learning means 701 learns a model for identifying an object indicated by data, using training data. First identification means 702 identifies the object indicated by the data using the model learned by the learning means 701. Second identification means 703 identifies the object indicated by the same data targeted by the first identification means 702, using a model different from the model learned by the learning means 701. The learning means 701 then re-learns the model using training data that includes the data together with a label determined based on the identification result derived by the second identification means 703.
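The re-learning scheme above is essentially pseudo-labeling: an independent second model labels the data, and the first model is retrained on the augmented set. A minimal sketch, using a toy 1-nearest-neighbour classifier as a stand-in for the patent's unspecified models (all names are illustrative):

```python
class Nearest1D:
    """Toy 1-NN classifier on scalar features, standing in for the
    patent's unspecified identification models."""
    def fit(self, xs, ys):
        self.xs, self.ys = list(xs), list(ys)
        return self
    def predict(self, x):
        i = min(range(len(self.xs)), key=lambda j: abs(self.xs[j] - x))
        return self.ys[i]

def relearn_with_second_model(train_xs, train_ys, model_a, model_b, unlabeled):
    # Learning means 701: initial training of model A.
    model_a.fit(train_xs, train_ys)
    # Second identification means 703: an independent model labels the data.
    pseudo_ys = [model_b.predict(x) for x in unlabeled]
    # Re-learning: the training data now includes the labels decided
    # from model B's identification results.
    model_a.fit(train_xs + list(unlabeled), train_ys + pseudo_ys)
    return model_a
```

The design point is that model B need not share architecture or training data with model A; it only needs to supply labels credible enough to extend A's training set.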
SYSTEM FOR AUTOMATIC TUMOR DETECTION AND CLASSIFICATION
Certain aspects of the present disclosure provide techniques for automatically detecting and classifying tumor regions in a tissue slide. The method generally includes obtaining a digitized tissue slide from a tissue slide database and determining, based on output from a tissue classification module, a type of tissue shown in the digitized tissue slide. The method further includes determining, based on output from a tumor classification model for the type of tissue, a region of interest (ROI) of the digitized tissue slide and generating a classified slide showing the ROI of the digitized tissue slide and an estimated diameter of the ROI. The method further includes displaying, on an image display unit, the classified slide and user interface (UI) elements enabling a pathologist to enter input related to the classified slide.
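The pipeline is a two-stage dispatch: a tissue classifier selects which tumor model to run, and that model's ROI yields a diameter estimate. A minimal sketch under stated assumptions (ROI as a bounding box, diameter as its longer side; every name here is hypothetical):

```python
def estimate_diameter(roi, mm_per_pixel=1.0):
    """Diameter of a bounding-box ROI (x0, y0, x1, y1), taken as the
    longer side, converted to millimetres. An illustrative choice only."""
    x0, y0, x1, y1 = roi
    return max(x1 - x0, y1 - y0) * mm_per_pixel

def classify_slide(slide, tissue_classifier, tumor_models):
    """Dispatch sketch of the claimed method: classify the tissue type,
    then apply the tumor model specific to that type."""
    tissue_type = tissue_classifier(slide)           # tissue classification module
    roi = tumor_models[tissue_type](slide)           # per-tissue tumor model
    return {"tissue": tissue_type, "roi": roi,
            "diameter": estimate_diameter(roi)}
```

Keying the tumor models by tissue type is the structural point of the claim: each tissue type gets a classifier specialized to it rather than one monolithic detector.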
Guided training for automation of content annotation
According to one implementation, a system for automating content annotation includes a computing platform having a hardware processor and a system memory storing an automation training software code. The hardware processor executes the automation training software code to initially train a content annotation engine using labeled content, test the content annotation engine using a first test set of content obtained from a training database, and receive corrections to a first automatically annotated content set resulting from the test. The hardware processor further executes the automation training software code to further train the content annotation engine based on the corrections, determine one or more prioritization criteria for selecting a second test set of content for testing the content annotation engine based on statistics relating to the first automatically annotated content set, and select the second test set of content from the training database based on the prioritization criteria.
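The train-test-correct-prioritize cycle described above resembles an active-learning loop. A minimal sketch, where the error-rate statistic and the top-k selection are illustrative choices standing in for the claim's unspecified "statistics" and "prioritization criteria":

```python
def guided_training_round(engine, train_fn, test_set, get_corrections):
    """One round of the claimed loop: test the annotation engine on a
    test set, collect human corrections, retrain on them, and return an
    error statistic for prioritizing the next test set."""
    predictions = {item: engine(item) for item in test_set}
    corrections = get_corrections(predictions)   # human-supplied fixes
    train_fn(corrections)                        # further training step
    # Statistic relating to the annotated content: the correction rate.
    return len(corrections) / max(len(test_set), 1)

def select_next_test_set(pool, priority_fn, k):
    """Select the k highest-priority items from the training database,
    where priority_fn encodes the derived prioritization criteria."""
    return sorted(pool, key=priority_fn, reverse=True)[:k]
```

In practice `priority_fn` might score items by similarity to frequently corrected content, so the next test set concentrates on the engine's weak spots.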
METHODS AND SYSTEMS FOR ANNOTATION AND TRUNCATION OF MEDIA ASSETS
Methods and systems for improving the interactivity of media content. The methods and systems are particularly applicable to the e-learning space, which features unique problems in engaging with users, maintaining that engagement, and allowing users to alter media assets to their specific needs. To address these issues, and to improve the interactivity of media assets generally, the methods and systems described herein provide for annotation and truncation of media assets. More particularly, the methods and systems described herein provide features such as annotation guidance and video condensation.
DROWSINESS DETECTION FOR VEHICLE CONTROL
Systems, methods and apparatus of drowsiness detection for vehicle control. For example, a vehicle includes: a camera configured to face a driver of the vehicle and generate a sequence of images of the driver driving the vehicle; an artificial neural network configured to analyze the sequence of images and classify, based on the sequence of images, whether the driver is in a drowsy state; and an infotainment system configured to provide instructions to the driver in response to a classification by the artificial neural network that the driver is in the drowsy state.
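As a concrete illustration of classifying drowsiness from an image sequence, the PERCLOS heuristic (percentage of eye closure over recent frames) is a commonly used stand-in for the claimed neural-network classifier; the thresholds and the per-frame eye-openness feature below are assumptions, not the patent's method:

```python
def is_drowsy(eye_openness_sequence, closed_thresh=0.3, perclos_thresh=0.5):
    """PERCLOS-style sketch: flag drowsiness when the eyes are closed
    (openness below closed_thresh) in more than perclos_thresh of the
    frames in the sequence. Per-frame openness would come from an
    upstream detector on the driver-facing camera images."""
    closed = sum(1 for o in eye_openness_sequence if o < closed_thresh)
    return closed / len(eye_openness_sequence) > perclos_thresh
```

An end-to-end system would feed the classifier's output to the infotainment layer, which issues instructions to the driver when the drowsy state is detected.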
Machine-Learned Models Featuring Matrix Exponentiation Layers
The present disclosure proposes a model that has more expressive power, e.g., can generalize from a smaller amount of parameters and assign more computation in areas of the function that need more computation. In particular, the present disclosure is directed to novel machine learning architectures that use the exponential of an input-dependent matrix as a nonlinearity. The mathematical simplicity of this architecture allows a detailed analysis of its behavior, providing stringent robustness guarantees via Lipschitz bounds.
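A matrix exponentiation layer of the kind described can be sketched as follows: build an input-dependent matrix M(x), exponentiate it, and apply the result to the input. The bilinear form of M(x), the tensor shape, and the truncated-Taylor `expm` are all illustrative assumptions, not the disclosure's notation:

```python
import numpy as np

def expm(a, terms=30):
    """Matrix exponential via truncated Taylor series with scaling and
    squaring. A minimal sketch, not a production implementation
    (scipy.linalg.expm would normally be used)."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(a, 1), 1e-12)))) + 1)
    a = a / (2 ** s)                      # scale so the series converges fast
    out = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a / k               # a^k / k!
        out = out + term
    for _ in range(s):
        out = out @ out                   # undo the scaling by squaring
    return out

def matrix_exp_layer(x, w):
    """Input-dependent nonlinearity: M(x) = sum_i x_i * W[i], then apply
    exp(M(x)) to x. W has shape (d, d, d) and would be learned; the
    construction of M(x) here is a hypothetical choice."""
    m = np.einsum('i,ijk->jk', x, w)      # input-dependent matrix
    return expm(m) @ x
```

Because exp(M) is always invertible and its operator norm is bounded by exp(||M||), layers of this form lend themselves to the Lipschitz-based robustness analysis the abstract mentions.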
Systems and methods for image labeling using artificial intelligence
An image analysis (“IA”) computer system for analyzing images of hail damage includes at least one processor in communication with at least one memory device. The at least one processor is programmed to: (i) store a damage prediction model associated with a rooftop, wherein the damage prediction model utilizes an artificial intelligence algorithm; (ii) display, to a user, an image of a rooftop; (iii) receive, from the user, a request to analyze damage to the rooftop; (iv) apply, by the at least one processor, the damage prediction model to the image, the damage prediction model outputting a plurality of damage prediction locations of the rooftop in relation to the image; and/or (v) display, by the at least one processor, an overlay box at each of the plurality of damage prediction locations, the overlay box being a virtual object overlaid onto the image for labeling the damage prediction locations.
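Steps (iv) and (v) of the claim, applying the damage prediction model and overlaying a box at each predicted location, can be sketched as below; the point-shaped predictions, the fixed box size, and all names are illustrative assumptions:

```python
def analyze_rooftop(image, damage_model, box_size=10):
    """Run the damage prediction model on the image and build an overlay
    box (a virtual object in image coordinates) centred on each predicted
    damage location."""
    locations = damage_model(image)       # [(x, y), ...] in image coordinates
    half = box_size // 2
    return [{"x": x - half, "y": y - half,
             "w": box_size, "h": box_size, "label": "hail"}
            for x, y in locations]
```

A UI layer would then draw each returned box over the displayed rooftop image so the user can inspect and label the predicted damage locations.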