G06V10/776

Navigating a vehicle based on data processing using synthetically generated images
11594016 · 2023-02-28

A user-generated graphical representation can be sent into a generative network to generate a synthetic image of an area including a road, the user-generated graphical representation including at least three different colors, each color representing a feature from a plurality of features. A determination can be made that a discrimination network fails to distinguish between the synthetic image and a sensor-detected image. The synthetic image can be sent, in response to determining that the discrimination network fails to distinguish between the synthetic image and the sensor-detected image, into an object detector to generate a non-user-generated graphical representation. An objective function can be determined based on a comparison between the user-generated graphical representation and the non-user-generated graphical representation. A perception model can be trained using the synthetic image in response to determining that the objective function is within a predetermined acceptable range.
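The gating logic in this abstract can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the patented networks: the discriminator, detector, and color-coded maps are toy placeholders chosen only to show the control flow (discriminator near chance → run detector → compare representations → accept for training).

```python
import numpy as np

# Hypothetical color-coded semantic map: each integer stands for one of the
# at-least-three feature colors (e.g. 0 = road, 1 = lane marking, 2 = vehicle).
user_map = np.array([[0, 0, 1],
                     [0, 2, 1],
                     [0, 0, 1]])

def discriminator_confidence(synthetic_image):
    # Stand-in for the discrimination network: returns P(real).
    # A value near 0.5 means it fails to distinguish synthetic from sensor-detected.
    return 0.52

def object_detector(synthetic_image):
    # Stand-in detector producing a non-user-generated graphical representation.
    detected = user_map.copy()
    detected[1, 1] = 0          # one disagreement with the user map
    return detected

def objective(user, detected):
    # Per-pixel disagreement rate between the two representations.
    return float(np.mean(user != detected))

synthetic = np.zeros((3, 3))    # placeholder for the generator's synthetic image

train_perception_model = False
if abs(discriminator_confidence(synthetic) - 0.5) < 0.05:  # cannot distinguish
    loss = objective(user_map, object_detector(synthetic))
    train_perception_model = loss <= 0.2                   # acceptable range
```

Here the detector disagrees with the user map on one of nine pixels, so the objective (1/9) falls inside the assumed acceptable range and the synthetic image would be admitted for perception-model training.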

Method, system, and computer program product for implementing reinforcement learning

Provided is a method for implementing reinforcement learning by a neural network. The method may include performing, for each epoch of a first predetermined number of epochs, a second predetermined number of training iterations and a third predetermined number of testing iterations using a first neural network. The first neural network may include a first set of parameters, the training iterations may include a first set of hyperparameters, and the testing iterations may include a second set of hyperparameters. The testing iterations may be divided into segments, and each segment may include a fourth predetermined number of testing iterations. A first pattern may be determined based on at least one of the segments. At least one of the first set of hyperparameters or the second set of hyperparameters may be adjusted based on the first pattern. A system and computer program product are also disclosed.
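The epoch/segment structure described above can be sketched in a few lines. The four predetermined numbers, the declining-reward "pattern," and the specific hyperparameter adjustments below are all illustrative assumptions; the abstract does not specify any of them.

```python
train_hparams = {"lr": 0.10}    # first set of hyperparameters (training)
test_hparams = {"eps": 0.20}    # second set of hyperparameters (testing)

# The four predetermined numbers from the abstract (values assumed):
EPOCHS, TRAIN_ITERS, TEST_ITERS, SEG_LEN = 3, 5, 8, 4

for epoch in range(EPOCHS):
    for _ in range(TRAIN_ITERS):
        pass  # placeholder training iteration updating the first set of parameters

    # Deterministic toy rewards from the testing iterations (declining).
    rewards = [1.0 - 0.1 * i for i in range(TEST_ITERS)]

    # Divide the testing iterations into segments of SEG_LEN iterations each.
    segments = [rewards[i:i + SEG_LEN] for i in range(0, TEST_ITERS, SEG_LEN)]
    seg_means = [sum(s) / len(s) for s in segments]

    # First pattern (assumed): mean reward declines across segments.
    if seg_means[-1] < seg_means[0]:
        train_hparams["lr"] *= 0.5   # adjust the first set of hyperparameters
        test_hparams["eps"] *= 0.9   # adjust the second set of hyperparameters
```

With these toy rewards the declining pattern fires every epoch, so the learning rate is halved three times (0.10 → 0.0125) and the exploration rate decays to 0.2 × 0.9³.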

COLLECTION OF MACHINE LEARNING TRAINING DATA FOR EXPRESSION RECOGNITION

Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
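The two-stage filter (crowd rating against a first quality criterion, then expert vetting) can be sketched as follows. The submission schema, the rating scale, and the threshold are all assumptions made for illustration.

```python
# Hypothetical crowdsourced submissions: crowd ratings on a 1-5 scale plus
# an expert vetting verdict. Field names are illustrative only.
submissions = [
    {"id": "img1", "ratings": [4, 5, 5], "expert_ok": True,  "label": "smile"},
    {"id": "img2", "ratings": [2, 3, 2], "expert_ok": True,  "label": "smile"},
    {"id": "img3", "ratings": [5, 4, 4], "expert_ok": False, "label": "frown"},
]

QUALITY_MIN = 4.0  # assumed first quality criterion: minimum mean crowd rating

def mean(xs):
    return sum(xs) / len(xs)

# Stage 1: keep images meeting the first quality criterion.
candidates = [s for s in submissions if mean(s["ratings"]) >= QUALITY_MIN]

# Stage 2: expert vetting; survivors become labeled training examples.
training_examples = [(s["id"], s["label"]) for s in candidates if s["expert_ok"]]
```

In this toy run only `img1` survives both stages: `img2` fails the crowd-rating criterion and `img3` is rejected by the expert, so the pipeline yields a single vetted training example.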

Machine learning inference user interface

Two-dimensional objects are displayed upon a user interface; user input selects an area and selects a machine learning model for execution. The results are displayed as an overlay over the objects in the user interface. User input selects a second model for execution; the result of this execution is displayed as a second overlay over the objects. A first overlay from a model is displayed over a set of objects in a user interface and a ground truth corresponding to the objects is displayed as a second overlay on the user interface. User input selects the ground truth overlay as a reference and causes a comparison of the first overlay with the ground truth overlay; the visual data from the comparison is displayed on the user interface. A comparison of M inference overlays with N reference overlays is performed and visual data from the comparison is displayed on the interface.
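The final step, comparing M inference overlays with N reference overlays, can be sketched numerically. The abstract does not say what comparison is used; intersection-over-union of overlay pixel sets is an assumed stand-in, and the "visual data" here is simply the resulting M×N score matrix.

```python
def iou(a, b):
    # Intersection-over-union of two overlays given as sets of pixel coordinates.
    union = len(a | b)
    return len(a & b) / union if union else 1.0

# Hypothetical overlays as pixel-coordinate sets.
inference_overlays = [{(0, 0), (0, 1)}, {(1, 1)}]       # M = 2 model overlays
reference_overlays = [{(0, 0), (0, 1), (0, 2)}]         # N = 1 ground-truth overlay

# M x N comparison matrix, i.e. the visual data displayed on the interface.
matrix = [[iou(m, n) for n in reference_overlays] for m in inference_overlays]
```

The first model overlay shares two of three union pixels with the ground truth (IoU = 2/3) while the second shares none, so the matrix immediately shows which overlay tracks the reference.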

Misuse index for explainable artificial intelligence in computing environments

A mechanism is described for facilitating a misuse index for explainable artificial intelligence in computing environments, according to one embodiment. A method of embodiments, as described herein, includes mapping training data with inference uses in a machine learning environment, where the training data is used for training a machine learning model. The method may further include detecting, based on one or more policy/parameter thresholds, one or more discrepancies between the training data and the inference uses, classifying the one or more discrepancies as one or more misuses, and creating a misuse index listing the one or more misuses.
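One way the detect-classify-index flow could look is sketched below. The feature-range profile, the threshold policy, and the index schema are all assumptions for illustration; the patent does not prescribe them.

```python
# Assumed mapping of training data to per-feature value ranges observed in training.
training_profile = {"age": (18, 65), "speed": (0, 120)}

# Inference-time inputs to check against that profile.
inference_uses = [
    {"age": 70, "speed": 80},    # age outside the training range
    {"age": 30, "speed": 200},   # speed outside the training range
]

THRESHOLD = 0  # assumed policy: any out-of-range value counts as a misuse

misuse_index = []
for i, use in enumerate(inference_uses):
    # Detect discrepancies between training data ranges and inference uses.
    discrepancies = [k for k, v in use.items()
                     if not (training_profile[k][0] <= v <= training_profile[k][1])]
    # Classify discrepancies exceeding the policy threshold as misuses.
    if len(discrepancies) > THRESHOLD:
        misuse_index.append({"use": i, "misuses": discrepancies})
```

Each inference use here violates one trained feature range, so the resulting index lists two entries, one naming `age` and one naming `speed` as the misused feature.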

Urban remote sensing image scene classification method in consideration of spatial relationships
11710307 · 2023-07-25

An urban remote sensing image scene classification method in consideration of spatial relationships is provided and includes the following steps: cutting a remote sensing image into sub-images in an even and non-overlapping manner; performing visual information coding on each of the sub-images to obtain a feature image Fv; inputting the feature image Fv into a crossing transfer unit to obtain hierarchical spatial characteristics; performing dimensionality-reduction convolution on the hierarchical spatial characteristics to obtain dimensionality-reduced hierarchical spatial characteristics; and performing a softmax-model-based classification on the dimensionality-reduced hierarchical spatial characteristics to obtain a classification result. The method comprehensively considers the roles of two kinds of spatial relationships, the regional spatial relationship and the long-range spatial relationship, in classification, and designs three paths in a crossing transfer unit for relationship fusion, thereby obtaining a better urban remote sensing image scene classification result.
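The pipeline's outer steps (even non-overlapping tiling, per-tile feature coding, softmax classification) can be sketched as follows. The "visual information coding" and the crossing transfer unit are replaced by trivial stand-ins (mean/std features and a toy linear layer), since the patented internals are not specified here.

```python
import numpy as np

def cut(image, tile):
    # Cut the image into even, non-overlapping sub-images of size tile x tile.
    h, w = image.shape
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h, tile) for c in range(0, w, tile)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((8, 8))          # toy single-band remote sensing image
tiles = cut(image, 4)               # four 4x4 sub-images

# Stand-in for visual information coding: mean/std features per sub-image.
features = np.array([[t.mean(), t.std()] for t in tiles])

# Toy linear classifier over 3 scene classes (replaces the crossing transfer
# unit and dimensionality-reduction convolution of the patented method).
W = np.ones((2, 3))
probs = np.array([softmax(f @ W) for f in features])
labels = probs.argmax(axis=1)       # classification result per sub-image
```

An 8×8 image with a tile size of 4 yields exactly four sub-images, and each softmax row sums to one, giving a valid class distribution per tile regardless of the toy weights used.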

SYSTEM OF JOINT BRAIN TUMOR AND CORTEX RECONSTRUCTION
20180008187 · 2018-01-11

System for performing fully automatic brain tumor and tumor-aware cortex reconstructions upon receiving multi-modal MRI data (T1, T1c, T2, T2-Flair). The system outputs imaging that delineates tumors (including tumor edema and active tumor core) from white matter and gray matter surfaces. In cases where existing MRI model data is insufficient, the model is trained on-the-fly for tumor segmentation and classification. A tumor-aware cortex segmentation that is adaptive to the presence of the tumor is performed using labels, from which the system reconstructs and visualizes both tumor and cortical surfaces for diagnostic and surgical guidance. The technology has been validated using a publicly available challenge dataset.
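The label-based, tumor-aware separation described above can be illustrated with a toy label map. The label conventions, thresholds, and channel heuristics below are invented for the sketch; the actual system uses trained segmentation models, not thresholding.

```python
import numpy as np

# Hypothetical label conventions (not from the patent).
LABELS = {"background": 0, "white_matter": 1, "gray_matter": 2,
          "tumor_edema": 3, "tumor_core": 4}

# Toy multi-modal input: four aligned channels standing in for T1, T1c, T2, T2-Flair.
rng = np.random.default_rng(1)
volume = rng.random((4, 6, 6))
t1c, t2 = volume[1], volume[2]

# Stand-in segmentation: thresholds on T1c separate tumor core from edema,
# and a T2 threshold separates gray from white matter elsewhere.
seg = np.where(t1c > 0.8, LABELS["tumor_core"],
      np.where(t1c > 0.6, LABELS["tumor_edema"],
      np.where(t2 > 0.5, LABELS["gray_matter"], LABELS["white_matter"])))

# Tumor-aware cortex mask: cortical tissue that is explicitly not tumor,
# usable for reconstructing cortical surfaces around the tumor.
cortex_mask = np.isin(seg, [LABELS["white_matter"], LABELS["gray_matter"]])
```

By construction the cortex mask never overlaps the tumor labels, which is the essential property a tumor-aware cortex segmentation must preserve before surface reconstruction.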