G06F18/24317

Method and device for reliably identifying objects in video images
11580332 · 2023-02-14 ·

A computer-implemented method for reliably identifying objects in a sequence of input images received with the aid of an imaging sensor. Positions of light sources in each input image are ascertained with the aid of a first machine learning system, in particular an artificial neural network. Objects are then identified from the resulting sequence of light-source positions, in particular with the aid of a second machine learning system, likewise in particular an artificial neural network.
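A minimal Python sketch of this two-stage idea, with simple stand-ins for both learned models. The brightness threshold, the displacement rule, and every function name below are illustrative assumptions, not the patented models:

```python
def detect_light_positions(frame):
    """Stage-1 stand-in: (row, col) of pixels brighter than a fixed threshold."""
    return [(r, c) for r, row in enumerate(frame)
            for c, v in enumerate(row) if v > 200]

def classify_track(per_frame_positions):
    """Stage-2 stand-in: label the track by its horizontal displacement."""
    cols = [ps[0][1] for ps in per_frame_positions if ps]
    if len(cols) < 2:
        return "static"
    return "moving" if abs(cols[-1] - cols[0]) >= 2 else "static"

# A bright spot drifting one column per frame across three 3x3 frames.
frames = [
    [[0, 0, 0], [255, 0, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 255, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 0, 255], [0, 0, 0]],
]
track = [detect_light_positions(f) for f in frames]
label = classify_track(track)
```

The key property mirrored here is that the second stage never sees raw pixels, only the sequence of light-source positions produced by the first stage.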

Detecting out-of-model scenarios for an autonomous vehicle

Detecting out-of-model scenarios for an autonomous vehicle including: determining, based on first sensor data from one or more sensors, an environmental state relative to the autonomous vehicle, wherein operational commands for the autonomous vehicle are based on a selected machine learning model, wherein the selected machine learning model comprises a first machine learning model; comparing the environmental state to a predicted environmental state relative to the autonomous vehicle; and determining, based on a differential between the environmental state and the predicted environmental state, whether to select a second machine learning model as the selected machine learning model.
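A toy sketch of the selection step, treating the environmental state as a numeric vector and the differential as a mean absolute difference. The distance measure, threshold, and names are illustrative assumptions, not the claimed implementation:

```python
def model_differential(observed, predicted):
    """Mean absolute difference between observed and predicted state vectors."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def select_model(observed, predicted, current_model, fallback_model, threshold=0.5):
    """Keep the current model unless the differential signals an out-of-model scenario."""
    if model_differential(observed, predicted) > threshold:
        return fallback_model
    return current_model

# Small differential: stay on the first model.
in_model = select_model([1.0, 2.0], [1.1, 2.0], "first_model", "second_model")
# Large differential: switch to the second model.
out_of_model = select_model([1.0, 2.0], [3.0, 4.0], "first_model", "second_model")
```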

DETECTION OF PLANT DISEASES WITH MULTI-STAGE, MULTI-SCALE DEEP LEARNING
20230225239 · 2023-07-20 ·

A computer system is provided comprising a classification model management server computer configured, by instructions, to: receive a new image from a user device; apply a first digital model to first regions within the new image for classifying each of the first regions into a particular class; apply a second digital model to second regions within the new image for classifying each of the second regions into a particular class; and transmit classification data related to the class of the first regions and the class of the second regions to the user device. In connection therewith, the second regions each generally correspond to a combination of multiple first regions.
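One way to read the region layout: first regions tile the image at a fine scale, and each second region spans a block of first regions. A sketch under that assumption (the 16- and 32-pixel tile sizes are illustrative, not taken from the patent):

```python
def tile(height, width, size):
    """Top-left corners of non-overlapping size x size regions."""
    return [(r, c) for r in range(0, height, size) for c in range(0, width, size)]

H, W = 64, 64
first_regions = tile(H, W, 16)    # fine scale, classified by the first model
second_regions = tile(H, W, 32)   # coarse scale, classified by the second model

# Each 32x32 second region covers a 2x2 block of 16x16 first regions.
ratio = len(first_regions) // len(second_regions)
```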

SYSTEM AND METHOD FOR DETERMINING DAMAGE ON CROPS

A computer-implemented method, computer program product and computer system (100) for determining the impact of herbicides on crop plants (11) in an agricultural field (10). The system includes an interface (110) to receive an image (20) with at least one crop plant representing a real world situation in the agricultural field (10) after herbicide application. An image pre-processing module (120) rescales the received image (20) to a rescaled image (20a) matching the size of an input layer of a first fully convolutional neural network (CNN1) referred to as the first CNN. The first CNN is trained to segment the rescaled image (20a) into crop (11) and non-crop (12, 13) portions, and provides a first segmented output (20s1) indicating the crop portions (20c) of the rescaled image with pixels belonging to representations of crop. A second fully convolutional neural network (CNN2), referred to as the second CNN, is trained to segment said crop portions into a second segmented output (20s2) with one or more sub-portions (20n, 20l) with each sub-portion including pixels associated with damaged parts of the crop plant showing a respective damage type (11-1, 11-2). A damage measurement module (130) determines a damage measure (131) for the at least one crop plant for each damage type (11-1, 11-2) based on the respective sub-portions of the second segmented output (20s2) in relation to the crop portion of the first segmented output (20s1).
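The damage measure can be read as a per-type pixel ratio: pixels of each damage type from the second segmented output, divided by the crop pixels from the first. A pure-Python sketch with toy masks (the mask encoding and function names are assumptions):

```python
def damage_measures(crop_mask, damage_labels):
    """crop_mask: 2D 0/1 map from the first CNN; damage_labels: 2D ints from
    the second CNN (0 = undamaged, k > 0 = damage type k). Returns {type: ratio}."""
    crop_pixels = sum(v for row in crop_mask for v in row)
    counts = {}
    for crop_row, label_row in zip(crop_mask, damage_labels):
        for is_crop, label in zip(crop_row, label_row):
            if is_crop and label:
                counts[label] = counts.get(label, 0) + 1
    return {k: n / crop_pixels for k, n in counts.items()}

# 2x2 crop region: one pixel of damage type 1, two pixels of damage type 2.
measures = damage_measures([[1, 1], [1, 1]], [[1, 0], [2, 2]])
```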

METHODS AND APPARATUS FOR VISUAL-AWARE HIERARCHY-BASED OBJECT RECOGNITION

The techniques described herein relate to computerized methods and apparatus for grouping images of objects based on semantic and visual information associated with the objects. The techniques described herein further relate to computerized methods and apparatus for training a machine learning model for object recognition.

Weakly-Supervised Action Localization by Sparse Temporal Pooling Network
20230215169 · 2023-07-06 ·

Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
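Two of the pieces above can be sketched numerically: the loss combining classification error with an L1 sparsity term on the frame attention, and the extraction of temporal proposals as above-threshold runs of class activations. The weight `beta`, the threshold, and all names are illustrative assumptions:

```python
def sparse_pooling_loss(classification_error, attention_weights, beta=0.1):
    """Loss = classification error + weighted L1 sparsity of frame selection."""
    return classification_error + beta * sum(abs(a) for a in attention_weights)

def extract_intervals(activations, threshold=0.5):
    """Contiguous runs of above-threshold temporal class activations."""
    intervals, start = [], None
    for t, a in enumerate(activations):
        if a >= threshold and start is None:
            start = t
        elif a < threshold and start is not None:
            intervals.append((start, t))
            start = None
    if start is not None:
        intervals.append((start, len(activations)))
    return intervals

loss = sparse_pooling_loss(0.3, [0.9, 0.0, 0.1])
proposals = extract_intervals([0.1, 0.8, 0.9, 0.2, 0.7])
```

The sparsity term is what pushes the model toward a small set of keyframes: attention mass on many frames raises the loss even when classification is correct.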

Morphometric detection of malignancy associated change

A system and method for morphometric detection of malignancy associated change (MAC) is disclosed, including the acts of: obtaining a sample; imaging cells to produce a 3D cell image for each cell; measuring a plurality of different structural biosignatures for each cell from its 3D cell image to produce feature data; analyzing the feature data by first using cancer case status as ground truth to supervise development of a classifier that tests the degree to which the features discriminate between cells from normal and cancer patients; and using the analyzed feature data to develop classifiers including a first classifier to discriminate normal squamous cells from normal and cancer patients, a second classifier to discriminate normal macrophages from normal and cancer patients, and a third classifier to discriminate normal bronchial columnar cells from normal and cancer patients.
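A schematic of the cell-type-specific classifier ensemble. The feature names, thresholds, and aggregation rule below are purely hypothetical stand-ins for the trained classifiers, shown only to illustrate the per-cell-type dispatch:

```python
# Hypothetical per-cell-type classifiers: each flags a normal-looking cell
# whose 3D morphometric features nonetheless suggest a cancer-case patient.
CLASSIFIERS = {
    "squamous":   lambda f: f["nuclear_texture"] > 0.6,
    "macrophage": lambda f: f["chromatin_density"] > 0.5,
    "columnar":   lambda f: f["shape_irregularity"] > 0.7,
}

def case_score(cells):
    """Fraction of cells flagged as showing malignancy associated change."""
    flags = [CLASSIFIERS[cell_type](features)
             for cell_type, features in cells if cell_type in CLASSIFIERS]
    return sum(flags) / len(flags) if flags else 0.0

score = case_score([
    ("squamous",   {"nuclear_texture": 0.8}),    # flagged
    ("macrophage", {"chromatin_density": 0.2}),  # not flagged
])
```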

RECOGNITION APPARATUS AND PROGRAM

According to an embodiment, the recognition apparatus includes an image interface, an input interface, and a processor. The image interface is configured to acquire a display screen image from an input device for inputting a character string included in a captured image in which recognition of the character string according to a first algorithm fails. The input interface is configured to input the character string to the input device. The processor is configured to acquire a result of character recognition processing performed on the display screen image according to a second algorithm different from the first algorithm, and input the character string based on the result of the character recognition processing to the input device through the input interface.
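The control flow reduces to a fallback pattern: when the first recognition algorithm fails, a second, different algorithm is run on the captured display screen image and its result is fed back to the input device. A sketch (the `None`-on-failure convention and all names are assumptions):

```python
def recognize_with_fallback(screen_image, first_algorithm, second_algorithm):
    """Try the first recognizer; on failure (None), fall back to the second,
    which in the described apparatus runs on the display screen image."""
    result = first_algorithm(screen_image)
    return result if result is not None else second_algorithm(screen_image)

failing_primary = lambda img: None       # first algorithm fails on this image
ocr_secondary = lambda img: "ABC-123"    # second, different algorithm succeeds
text = recognize_with_fallback("display_screen.png", failing_primary, ocr_secondary)
```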

Deep neural network system for similarity-based graph representations

There is described a neural network system implemented by one or more computers for determining graph similarity. The neural network system comprises one or more neural networks configured to process an input graph to generate a node state representation vector for each node of the input graph and an edge representation vector for each edge of the input graph; and process the node state representation vectors and the edge representation vectors to generate a vector representation of the input graph. The neural network system further comprises one or more processors configured to: receive a first graph; receive a second graph; generate a vector representation of the first graph; generate a vector representation of the second graph; determine a similarity score for the first graph and the second graph based upon the vector representations of the first graph and the second graph.
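A toy numeric sketch of the pipeline: node states are updated from neighbour messages over edges, reduced to a graph representation, and two graphs are scored by comparing their representations. The 1-D states, the mixing rule, and the distance-based score are illustrative stand-ins for the learned networks:

```python
def graph_embedding(node_states, edges, rounds=2):
    """Toy stand-in for the neural embedding: each round mixes a node's state
    with the sum of its neighbours' states, then all states are summed."""
    state = dict(node_states)
    for _ in range(rounds):
        messages = {n: 0.0 for n in state}
        for u, v in edges:
            messages[u] += state[v]
            messages[v] += state[u]
        state = {n: 0.5 * state[n] + 0.5 * messages[n] for n in state}
    return sum(state.values())

def similarity(graph_a, graph_b):
    """Closer to 0 means more similar; a stand-in for the learned score."""
    return -abs(graph_embedding(*graph_a) - graph_embedding(*graph_b))

g1 = ({0: 1.0, 1: 2.0}, [(0, 1)])
g2 = ({0: 1.0, 1: 5.0}, [(0, 1)])
```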

VALIDATION OF AI-BASED RESULT DATA

In a method, comparison features are extracted from labeled reference image data. Features are also extracted from the image data, and a statistical comparison of the comparison features with the features then takes place. On the basis of the statistical comparison and a quality criterion, the quality of the AI-based result data is determined. A method for correcting result data and a method for AI-based acquisition of result data on the basis of measured examination data are additionally described. Also described are a validation entity, an entity for correcting result data, an entity for acquiring result data, and a medical imaging entity.
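A minimal sketch of the validation step, assuming the statistical comparison reduces to comparing per-feature statistics against the labeled reference and the quality criterion is a tolerance on the difference. All names and the specific statistic are illustrative assumptions:

```python
def feature_statistics(values):
    """Mean and variance of one extracted feature across samples."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, variance

def validate(reference_features, result_features, tolerance=0.2):
    """Quality criterion stand-in: result data passes if its feature mean
    stays within a tolerance of the labeled reference mean."""
    ref_mean, _ = feature_statistics(reference_features)
    new_mean, _ = feature_statistics(result_features)
    return abs(ref_mean - new_mean) <= tolerance

ok = validate([0.9, 1.0, 1.1], [1.0, 1.05, 1.1])   # within criterion
bad = validate([0.9, 1.0, 1.1], [2.0, 2.1, 2.2])   # fails criterion
```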