G06V10/72

Detecting changes in forest composition

A method of producing a model to detect changes in forest cover is disclosed. The method includes obtaining forest-cover classification data of a land area. The land area includes one or more subregions having unchanged forest-cover classifications between a first time and a second time. The method further includes obtaining image data of the subregions at multiple times. For at least one forest-cover classification, the method includes applying a statistical analysis to the image data to determine one or more threshold values representing measurement variations. The method further includes comparing subsequently obtained image data to the one or more threshold values and classifying the one or more subregions as changed or unchanged based on that comparison.
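A minimal sketch of the thresholding idea described above, assuming a simple mean-and-standard-deviation statistical analysis: subregions whose forest-cover class is unchanged between two times supply the normal measurement variation, and later observations falling outside the learned thresholds are flagged as changed. The function names and NDVI-like sample values are illustrative, not from the patent.

```python
import statistics

def learn_thresholds(unchanged_samples, k=3.0):
    """Derive (low, high) thresholds from measurements of subregions
    whose forest-cover classification is unchanged over time."""
    mean = statistics.fmean(unchanged_samples)
    std = statistics.stdev(unchanged_samples)
    return mean - k * std, mean + k * std

def classify(value, thresholds):
    """Classify a subsequently obtained measurement as changed/unchanged."""
    low, high = thresholds
    return "unchanged" if low <= value <= high else "changed"

# Example: vegetation-index measurements of one subregion across dates.
history = [0.71, 0.69, 0.73, 0.70, 0.72, 0.68]
t = learn_thresholds(history)
print(classify(0.70, t))  # within normal variation
print(classify(0.31, t))  # far below the band: likely forest loss
```

A per-classification threshold (one band per forest-cover class) would follow the abstract more closely; the single band here keeps the sketch short.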

GENERATING IMAGE FEATURES BASED ON ROBUST FEATURE-LEARNING
20180005070 · 2018-01-04

Techniques for increasing robustness of a convolutional neural network based on training that uses multiple datasets and multiple tasks are described. For example, a computer system trains the convolutional neural network across multiple datasets and multiple tasks. The convolutional neural network is configured for learning features from images and accordingly generating feature vectors. By using multiple datasets and multiple tasks, the robustness of the convolutional neural network is increased. A feature vector of an image is used to apply an image-related operation to the image. For example, the image is classified, indexed, or objects in the image are tagged based on the feature vector. Because the robustness is increased, the accuracy of the generated feature vectors is also increased. Hence, the overall quality of an image service that relies on the image-related operation is enhanced.
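The training scheme above can be sketched as a scheduling problem: a shared backbone is updated by alternating batches drawn from several datasets, each paired with its own task head, so no single dataset dominates the shared features. The network internals are stubbed out; the names (backbone, heads) and the round-robin schedule are illustrative assumptions, not from the patent.

```python
from itertools import cycle, islice

def train_step(backbone_state, head_states, task, batch):
    # Stand-in for one forward/backward pass: record which task touched
    # the shared backbone, and how many images its head has seen.
    backbone_state.append(task)
    head_states[task] += len(batch)
    return backbone_state, head_states

# Two datasets, each tied to a different task head.
datasets = {
    "classification": [["img1", "img2"], ["img3"]],
    "tagging": [["img4"], ["img5", "img6"]],
}
backbone, heads = [], {t: 0 for t in datasets}
batches = {t: iter(ds) for t, ds in datasets.items()}

# Round-robin over tasks: the shared backbone sees every dataset in turn.
for task in islice(cycle(datasets), 4):
    backbone, heads = train_step(backbone, heads, task, next(batches[task]))

print(backbone)  # alternates between the two tasks
print(heads)     # each head has consumed its own dataset's batches
```

In a real system the `train_step` stub would be a gradient update of shared convolutional weights plus one task-specific head.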

A Method for Testing an Embedded System of a Device, a Method for Identifying a State of the Device and a System for These Methods

A method of testing an embedded system of a device uses a testing robot, a central control unit, and a device under test. The device under test may be in different states, which are determined using the testing robot with a visual sensor. After the state of the device is determined, the testing robot interacts with the device under test and changes the state of the device to a new state.
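The loop described above amounts to driving a state machine: the visual sensor identifies the current state, the robot performs an interaction, and the resulting transition is checked. The screen names, actions, and transition table below are hypothetical, assumed only for illustration.

```python
# Illustrative transition table for a device under test.
TRANSITIONS = {
    ("home_screen", "press_menu"): "menu",
    ("menu", "press_settings"): "settings",
    ("settings", "press_back"): "menu",
}

def identify_state(visual_frame):
    # Stand-in for image-based state recognition by the visual sensor.
    return visual_frame["detected_screen"]

def run_test(frames, actions):
    """Identify the state from each frame, apply the robot's action,
    and log the (state, action, new_state) transition."""
    log = []
    for frame, action in zip(frames, actions):
        state = identify_state(frame)
        new_state = TRANSITIONS.get((state, action), "error")
        log.append((state, action, new_state))
    return log

frames = [{"detected_screen": "home_screen"}, {"detected_screen": "menu"}]
actions = ["press_menu", "press_settings"]
print(run_test(frames, actions))
```

An unexpected `"error"` transition would indicate either a defect in the embedded system or a gap in the transition model.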

SYSTEMS AND METHODS FOR RAPID DEVELOPMENT OF OBJECT DETECTOR MODELS

A computer vision system configured for detection and recognition of objects in video and still imagery in a live or historical setting uses a teacher-student object detector training approach to yield a merged student model capable of detecting all of the classes of objects that any of the teacher models is trained to detect. Further, training is simplified by providing an iterative training process wherein a relatively small number of images is labeled manually as initial training data, after which an iterated model cooperates with a machine-assisted labeling process and an active learning process where detector model accuracy improves with each iteration, yielding improved computational efficiency. Further, synthetic data is generated by which an object of interest can be placed in a variety of settings sufficient to permit training of models. A user interface guides the operator in the construction of a custom model capable of detecting a new object.
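The teacher-student merge above can be sketched as pooling pseudo-labels: each teacher detects only its own class set, their outputs are combined per image, and one student model is trained on the union of all classes. Detector internals are stubbed with a trivial tag matcher; all names and data here are hypothetical.

```python
def teacher_detect(teacher_classes, image):
    # Stand-in detector: "finds" any of its classes named in the image tags.
    return [c for c in teacher_classes if c in image["tags"]]

def build_student_labels(teachers, images):
    """Merge the teachers' class sets and pool their per-image
    pseudo-labels as training data for a single student model."""
    merged_classes = sorted(set().union(*teachers))
    labels = []
    for img in images:
        pseudo = sorted({c for t in teachers for c in teacher_detect(t, img)})
        labels.append({"image": img["id"], "boxes": pseudo})
    return merged_classes, labels

teachers = [{"car", "truck"}, {"person"}]
images = [{"id": 0, "tags": ["car", "person"]}, {"id": 1, "tags": ["truck"]}]
classes, labels = build_student_labels(teachers, images)
print(classes)  # union of all teacher classes
print(labels)   # pooled pseudo-labels per image
```

In the iterative process the abstract describes, these pooled labels would seed machine-assisted labeling, with active learning selecting which images a human corrects next.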

COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM, METHOD OF PROCESSING INFORMATION, AND INFORMATION PROCESSING APPARATUS
20230230357 · 2023-07-20

A non-transitory computer-readable recording medium stores an information processing program for causing a computer to execute a process including: extracting a first feature from an image; detecting, from the extracted first feature, a plurality of visual entities included in the image; generating a second feature in which the visual entities in at least one combination of the plurality of detected visual entities are combined, in the first feature, with each other; generating, based on the first feature and the second feature, a first map that indicates a relation of each visual entity; extracting a fourth feature based on the first map and a third feature obtained by converting the first feature; and estimating the relation from the fourth feature.
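A simplified sketch of the first stages of the feature flow above: per-entity features (the first feature) are combined pairwise (the second feature), and a relation map scores each ordered entity pair. Real implementations would use learned networks; the asymmetric elementwise combination and the toy feature vectors below are stand-ins chosen only for illustration.

```python
def pairwise_combine(entity_feats):
    """Second feature: combine each ordered pair of detected entities.
    The asymmetric weight (a + 2*b) distinguishes subject from object."""
    pairs = {}
    n = len(entity_feats)
    for i in range(n):
        for j in range(n):
            if i != j:
                pairs[(i, j)] = [a + 2 * b
                                 for a, b in zip(entity_feats[i], entity_feats[j])]
    return pairs

def relation_map(pairs):
    """First map: one relation score per ordered entity pair."""
    return {k: sum(v) for k, v in pairs.items()}

feats = [[1.0, 0.0], [0.5, 0.25], [0.0, 2.0]]  # e.g. three detected entities
rmap = relation_map(pairwise_combine(feats))
best = max(rmap, key=rmap.get)
print(best, rmap[best])  # strongest candidate relation pair
```

The abstract's third and fourth features (a converted first feature, and its combination with the map) are omitted here to keep the sketch short.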

Action recognition method and apparatus

An action recognition method and apparatus relate to artificial intelligence and include extracting a spatial feature of a to-be-processed picture, determining a virtual optical flow feature of the to-be-processed picture based on the spatial feature and X spatial features and X optical flow features in a preset feature library, where the X spatial features and the X optical flow features are in a one-to-one correspondence, determining a first type of confidence of the to-be-processed picture in different action categories based on similarities between the virtual optical flow feature and Y optical flow features, where each of the Y optical flow features in the preset feature library corresponds to one action category, X and Y are both integers greater than 1, and determining an action category of the to-be-processed picture based on the first type of confidence.
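The core idea above can be sketched as follows: a still picture has a spatial feature but no motion, so the library's paired (spatial, optical-flow) features are used to synthesize a "virtual" flow feature from spatially similar entries, and action categories are then scored by flow similarity. Cosine similarity, the similarity-weighted average, and the tiny two-entry library are illustrative assumptions, not the patent's specific procedure.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def virtual_flow(spatial, lib_spatial, lib_flow):
    """Synthesize a virtual optical flow feature: weight each library
    flow feature by the picture's spatial similarity to its paired
    spatial feature (the X one-to-one pairs)."""
    w = [max(cosine(spatial, s), 0.0) for s in lib_spatial]
    total = sum(w) or 1.0
    dim = len(lib_flow[0])
    return [sum(wi * f[d] for wi, f in zip(w, lib_flow)) / total
            for d in range(dim)]

def action_confidence(vflow, labeled_flow):
    """First type of confidence: similarity of the virtual flow feature
    to each of the Y category-labeled flow features."""
    return {cat: cosine(vflow, f) for cat, f in labeled_flow.items()}

lib_spatial = [[1.0, 0.0], [0.0, 1.0]]
lib_flow = [[0.9, 0.1], [0.1, 0.9]]
labeled_flow = {"running": [1.0, 0.0], "waving": [0.0, 1.0]}
conf = action_confidence(virtual_flow([0.9, 0.1], lib_spatial, lib_flow),
                         labeled_flow)
best = max(conf, key=conf.get)
print(best)  # → running
```

The picture's spatial feature sits close to the library's first entry, so the synthesized flow leans toward that entry's motion pattern and the "running" category wins.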