Patent classifications
G06V10/82
EXPLAINING A MODEL OUTPUT OF A TRAINED MODEL
The invention relates to a computer-implemented method (500) of generating explainability information for explaining a model output of a trained model. The method uses one or more aspect recognition models configured to indicate a presence of respective characteristics in the input instance. A saliency method is applied to obtain a masked source representation of the input instance at a source layer of the trained model (e.g., the input layer or an internal layer), comprising those elements at the source layer that are relevant to the model output. The masked source representation is mapped to a target layer (e.g., an input or internal layer) of an aspect recognition model, and the aspect recognition model is then applied to obtain a model output indicating a presence of the given characteristic relevant to the model output of the trained model. The characteristics indicated by the aspect recognition models are output as the explainability information.
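Purely as an illustrative sketch of the claimed pipeline, with toy stand-in models (all names such as `saliency_mask`, `AspectModel` and `explain` are hypothetical and not from the patent), the flow might look like:

```python
def saliency_mask(representation, relevance, threshold=0.5):
    """Keep only elements of the source-layer representation whose
    relevance to the model output exceeds a threshold."""
    return [x if r > threshold else 0.0 for x, r in zip(representation, relevance)]

def map_to_target_layer(masked_source):
    """Map the masked source representation onto the target layer of an
    aspect recognition model (identity mapping in this toy example)."""
    return masked_source

class AspectModel:
    """Toy aspect recognizer: indicates its characteristic is present when
    the mean activation of its target-layer input exceeds a threshold."""
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold
    def indicates(self, target_repr):
        return sum(target_repr) / len(target_repr) > self.threshold

def explain(representation, relevance, aspect_models):
    masked = saliency_mask(representation, relevance)
    target = map_to_target_layer(masked)
    # Explainability information: characteristics indicated as present
    return [m.name for m in aspect_models if m.indicates(target)]

aspects = [AspectModel("stripes", 0.2), AspectModel("fur", 0.6)]
print(explain([0.9, 0.8, 0.1, 0.7], [0.9, 0.4, 0.2, 0.8], aspects))  # ['stripes']
```

In a real system the saliency step would be a gradient- or perturbation-based attribution over an actual network layer rather than a thresholded list.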
METHOD AND PLATFORM OF GENERATING DOCUMENT, ELECTRONIC DEVICE AND STORAGE MEDIUM
A method and a platform of generating a document, an electronic device, and a storage medium are provided, which relate to the field of artificial intelligence technology, in particular to the fields of computer vision and deep learning, and may be applied to text recognition and other scenarios. The method includes: performing category recognition on a document picture to obtain a target category result; determining a target structured model matched with the target category result; and performing, by using the target structured model, structure recognition on the document picture to obtain a structure recognition result, so as to generate an electronic document based on the structure recognition result, wherein the structure recognition result includes a field attribute recognition result and a field position recognition result.
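A minimal sketch of the described dispatch flow, with hypothetical names (`recognize_category`, `StructuredModel`, `generate_document` are illustrative stand-ins, not the patented implementation):

```python
def recognize_category(document_picture):
    """Stand-in category recognizer: here we just read a tag the caller set.
    A real system would classify the document picture with a trained model."""
    return document_picture["category"]

class StructuredModel:
    """Toy structured model returning field attributes and field positions."""
    def __init__(self, fields):
        self.fields = fields
    def recognize_structure(self, document_picture):
        # A real model would localize and label fields in the image.
        return {"field_attributes": self.fields,
                "field_positions": {f: (0, 0, 10, 10) for f in self.fields}}

MODELS = {  # one structured model matched to each target category
    "invoice": StructuredModel(["vendor", "total"]),
    "receipt": StructuredModel(["merchant", "amount"]),
}

def generate_document(document_picture):
    category = recognize_category(document_picture)       # category recognition
    model = MODELS[category]                              # matched structured model
    result = model.recognize_structure(document_picture)  # structure recognition
    # Generate the electronic document from the structure recognition result
    return {"category": category, **result}

doc = generate_document({"category": "invoice"})
print(doc["field_attributes"])  # ['vendor', 'total']
```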
A METHOD FOR TRAINING A NEURAL NETWORK TO DESCRIBE AN ENVIRONMENT ON THE BASIS OF AN AUDIO SIGNAL, AND THE CORRESPONDING NEURAL NETWORK
A neural network, a system using this neural network and a method for training a neural network to output a description of the environment in the vicinity of at least one sound acquisition device on the basis of an audio signal acquired by the sound acquisition device, the method including: obtaining audio and image training signals of a scene showing an environment with objects generating sounds, obtaining a target description of the environment seen on the image training signal, inputting the audio training signal to the neural network so that the neural network outputs a training description of the environment, and comparing the target description of the environment with the training description of the environment.
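As an illustrative toy sketch of this training scheme (the `AudioDescriber` lookup-table "network" and all other names are hypothetical stand-ins for a real neural network and loss): the audio model is driven to reproduce the target description derived from the paired image.

```python
def describe_image(image_signal):
    """Stand-in for obtaining the target description of the environment
    seen on the image training signal."""
    return image_signal["objects"]

class AudioDescriber:
    """Toy 'network': maps audio features to object names via a learned table."""
    def __init__(self):
        self.table = {}
    def forward(self, audio_signal):
        return [self.table.get(f, "?") for f in audio_signal["features"]]
    def update(self, audio_signal, target):
        # Gradient-step stand-in: memorize feature -> object associations.
        for f, obj in zip(audio_signal["features"], target):
            self.table[f] = obj

def train(net, pairs, epochs=2):
    for _ in range(epochs):
        for audio, image in pairs:
            target = describe_image(image)  # target description (from image)
            pred = net.forward(audio)       # training description (from audio)
            if pred != target:              # compare, then adapt the network
                net.update(audio, target)

net = AudioDescriber()
train(net, [({"features": ["engine_hum"]}, {"objects": ["car"]})])
print(net.forward({"features": ["engine_hum"]}))  # ['car']
```

After training, only the audio signal is needed at inference time; the image branch exists solely to supply supervision.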
AUTOMATED ASSESSMENT OF ENDOSCOPIC DISEASE
The application relates to devices and methods for analysing a colonoscopy video or a portion thereof, and for assessing the severity of ulcerative colitis in a subject by analysing a colonoscopy video obtained from the subject. Analysing a colonoscopy video comprises using a first deep neural network classifier to classify image data from the subject colonoscopy video or portion thereof into at least a first severity class (more severe endoscopic lesions) and a second severity class (less severe endoscopic lesions), wherein the first deep neural network has been trained at least in part in a weakly supervised manner using training image data from a plurality of training colonoscopy videos, the training image data comprising multiple sets of consecutive frames from the plurality of training colonoscopy videos, wherein frames in a set have the same severity class label. Devices and methods for providing a tool for analysing colonoscopy videos are also described.
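The weak-supervision setup can be sketched as follows (an illustrative assumption of how video-level labels propagate to frame sets; `make_weak_sets` is a hypothetical helper, not the patented training procedure): consecutive frames from a training video are grouped into sets, and every frame in a set inherits the same severity-class label.

```python
def make_weak_sets(frames, video_label, set_size=3):
    """Split a video's frames into sets of consecutive frames; all frames
    in a set share the same (video-level) severity-class label."""
    sets = []
    for i in range(0, len(frames) - set_size + 1, set_size):
        sets.append((frames[i:i + set_size], video_label))
    return sets

# Two training colonoscopy videos, labelled only at video level (weak labels):
video_a = [f"a{i}" for i in range(6)]  # more severe endoscopic lesions
video_b = [f"b{i}" for i in range(6)]  # less severe endoscopic lesions
training_data = make_weak_sets(video_a, "severe") + make_weak_sets(video_b, "mild")
print(len(training_data))  # 4 sets of 3 consecutive frames each
```

The deep neural network classifier would then be trained on these frame sets, treating the shared set label as the target for every frame it contains.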
Method of Diagnosis
The invention relates to methods for determining the stage of a disease, particularly an ocular neurodegenerative disease such as Alzheimer's, Parkinson's, Huntington's and glaucoma, comprising the steps of identifying the status of microglial cells in the retina and relating that status to disease stage. Methods for identifying cells in the eye are also provided, as are labelled markers and the use thereof.
Generating a Top View of a Motor Vehicle
A device generates a first top view of a motor vehicle depending on first view-related information from at least one image of at least one camera whose optical axis is substantially parallel to a plane spanned by the vehicle longitudinal direction and the vehicle lateral direction.
METHOD FOR LEARNING REPRESENTATIONS FROM CLOUDS OF POINTS DATA AND A CORRESPONDING SYSTEM
A method for learning representations from clouds of points data includes encoding clouds of points data into at least one representation by creating at least one tensor representation out of the clouds of points data. The method further includes using a loss function that utilizes a noisy reconstruction for reducing overfitting.
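A minimal sketch under stated assumptions (points are voxelized into an occupancy tensor, and the reconstruction loss is computed against a *noisy* copy of the encoded input to discourage overfitting; `encode_to_tensor`, `noisy_copy` and `reconstruction_loss` are hypothetical names, not from the patent):

```python
import random

def encode_to_tensor(points, grid=4):
    """Encode a cloud of 2-D points in [0, 1)^2 as a grid x grid occupancy tensor."""
    tensor = [[0.0] * grid for _ in range(grid)]
    for x, y in points:
        tensor[int(x * grid)][int(y * grid)] = 1.0
    return tensor

def noisy_copy(tensor, sigma=0.1, rng=random.Random(0)):
    """Noisy reconstruction target: the encoded input with Gaussian perturbation."""
    return [[v + rng.gauss(0.0, sigma) for v in row] for row in tensor]

def reconstruction_loss(reconstruction, tensor):
    """Mean squared error against a noisy version of the encoded input, so even
    a perfect reconstruction is never rewarded for matching the data exactly."""
    target = noisy_copy(tensor)
    n = len(tensor) * len(tensor[0])
    return sum((r - t) ** 2
               for row_r, row_t in zip(reconstruction, target)
               for r, t in zip(row_r, row_t)) / n

points = [(0.1, 0.2), (0.7, 0.8)]
t = encode_to_tensor(points)
loss = reconstruction_loss(t, t)  # small but nonzero: the target is noisy
print(round(loss, 4))
```

The noise injection plays the same regularizing role as in denoising autoencoders: the encoder cannot drive the loss to zero by memorizing the clean input.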