G06F18/2185

Classifying images utilizing generative-discriminative feature representations

The present disclosure relates to systems, non-transitory computer-readable media, and methods for classifying an input image utilizing a classification model conditioned by a generative model and/or self-supervision. For example, the disclosed systems can utilize a generative model to generate a reconstructed image from an input image to be classified. In turn, the disclosed systems can combine the reconstructed image with the input image itself. Using the combination of the input image and the reconstructed image, the disclosed systems utilize a classification model to determine a classification for the input image. Furthermore, the disclosed systems can employ self-supervised learning to cause the classification model to learn discriminative features that better classify images of both known classes and open-set categories.
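
The combine-then-classify flow above can be sketched with toy stand-ins; `reconstruct` and `classify` below are hypothetical placeholders for the trained generative model and classification model, and the reconstruction-error heuristic is only an assumption used to illustrate open-set handling, not the disclosed method.

```python
def reconstruct(image):
    # Toy stand-in for a generative model: reconstruct every pixel as the
    # image mean (a real system would use a trained autoencoder or GAN).
    mean = sum(image) / len(image)
    return [mean] * len(image)

def classify(features, threshold=0.5):
    # Toy stand-in classifier operating on the combined features: flags the
    # image as a "known" class when the input stays close to its
    # reconstruction, else as an open-set "unknown".
    n = len(features) // 2
    original, recon = features[:n], features[n:]
    error = sum(abs(a - b) for a, b in zip(original, recon)) / n
    return "known" if error < threshold else "unknown"

def classify_with_reconstruction(image):
    # Combine the input image with its reconstruction, then classify.
    recon = reconstruct(image)
    return classify(image + recon)
```

A uniform image reconstructs exactly and is treated as in-distribution; a high-variance image diverges from its reconstruction and falls into the open-set bucket.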

APPARATUS AND METHODS FOR GENERATING DENOISING MODEL
20230230208 · 2023-07-20

Described herein is a method for training a denoising model. The method includes obtaining a first set of simulated images based on design patterns. These simulated images may be clean, and noise can be added to them to generate noisy simulated images. The clean and noisy simulated images are then used as training data to generate a denoising model.
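
The training-data construction step can be sketched as follows; the Gaussian noise model and the `make_denoising_pairs` helper are assumptions for illustration, since the abstract does not specify the noise distribution.

```python
import random

def make_denoising_pairs(clean_images, sigma=0.1, seed=0):
    # Build (noisy, clean) training pairs by adding zero-mean Gaussian
    # noise to clean simulated images; the pairs supervise a denoiser.
    rng = random.Random(seed)
    pairs = []
    for clean in clean_images:
        noisy = [p + rng.gauss(0.0, sigma) for p in clean]
        pairs.append((noisy, clean))
    return pairs
```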

SYSTEMS AND METHODS FOR DYNAMIC ARTIFICIAL INTELLIGENCE (AI) GRAPHICAL USER INTERFACE (GUI) GENERATION

Systems, apparatus, interfaces, methods, and articles of manufacture that provide for Artificial Intelligence (AI) User Interface (UI) and/or Graphical User Interface (GUI) generation.

CORRECTING DIFFERENCES IN MULTI-SCANNERS FOR DIGITAL PATHOLOGY IMAGES USING DEEP LEARNING

The present disclosure relates to techniques for transforming digital pathology images obtained by different slide scanners into a common format for image analysis. Particularly, aspects of the present disclosure are directed to obtaining a source image of a biological specimen, the source image being generated by a first type of scanner; inputting, into a generator model, a randomly generated noise vector and a latent feature vector from the source image as input data; generating, by the generator model, a new image based on the input data; inputting the new image into a discriminator model; generating, by the discriminator model, a probability of the new image being authentic or fake; determining whether the new image is authentic or fake based on the generated probability; and outputting the new image when the image is authentic.

TRAINING AND IMPLEMENTING MACHINE-LEARNING MODELS UTILIZING MODEL CONTAINER WORKFLOWS

The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a pre-defined model container workflow allowing computing devices to flexibly and efficiently define, train, deploy, and maintain machine-learning models. For instance, the disclosed systems can provide scaffolding and boilerplate code for machine-learning models. To illustrate, boilerplate code can include predetermined designs of base classes for common use cases such as training and batch inference. In addition, the scaffolding provides an opinionated directory structure for organizing code of a machine-learning model. Further, the disclosed systems can provide containerization and various tooling (e.g., command interface tooling, platform upgrade tooling, and model repository management tooling). Additionally, the disclosed systems can provide out-of-the-box compatibility with one or more different compute instances for increased flexibility and cross-system integration.
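
A scaffolded base class for the training and batch-inference use cases might look like the sketch below; `ModelContainer` and its method names are assumptions for illustration, not the disclosed API.

```python
class ModelContainer:
    # Hypothetical boilerplate base class a model-container workflow could
    # scaffold: subclasses fill in the hooks, the base class fixes the
    # lifecycle (load data -> train -> batch inference).
    def load_data(self):
        raise NotImplementedError

    def train(self, data):
        raise NotImplementedError

    def predict(self, batch):
        raise NotImplementedError

    def run_batch_inference(self, batches):
        data = self.load_data()
        self.train(data)
        return [self.predict(b) for b in batches]

class MeanModel(ModelContainer):
    # Minimal example subclass: centers each batch on the training mean.
    def load_data(self):
        return [1.0, 2.0, 3.0]

    def train(self, data):
        self.mean = sum(data) / len(data)

    def predict(self, batch):
        return [x - self.mean for x in batch]
```

Fixing the lifecycle in the base class is the "opinionated" part: every model container exposes the same entry point regardless of what the subclass does internally.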

Characterizing failures of a machine learning model based on instance features

The present disclosure relates to systems, methods, and computer-readable media that evaluate performance of a machine learning system in connection with a test dataset. For example, systems disclosed herein may receive a test dataset and identify label information for the test dataset, including feature information and ground truth data. The systems disclosed herein can compare the ground truth data and outputs generated by a machine learning system to evaluate performance of the machine learning system with respect to the test dataset. The systems disclosed herein may further generate feature clusters based on failed outputs and corresponding features, and generate a number of performance views that illustrate performance of the machine learning system with respect to clustered groupings of the test dataset.
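
The failure-clustering step can be sketched as below; grouping by an exact feature key (the assumed `"category"` feature) is a simplification, since a real system might cluster numeric feature vectors instead.

```python
def cluster_failures(instances):
    # Each instance is (features, ground_truth, model_output). Collect the
    # failed outputs and group them by a shared instance feature to surface
    # systematic failure modes.
    clusters = {}
    for features, truth, output in instances:
        if output != truth:                  # a "failed output"
            key = features.get("category")   # assumed feature name
            clusters.setdefault(key, []).append(features)
    return clusters
```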

Efficient verification of machine learning applications

An example operation may include one or more of: generating, by a training participant client comprising a training dataset, a plurality of transaction proposals that each correspond to a training iteration for machine learning model training related to stochastic gradient descent, the machine learning model training comprising a plurality of training iterations, the transaction proposals comprising a gradient calculation performed by the training participant client, a batch from the training dataset, a loss function, and an original model parameter; receiving, by one or more endorser nodes of peers of a blockchain network, the plurality of transaction proposals; and evaluating each transaction proposal.
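
An endorser's evaluation of a proposal can be sketched as recomputing the claimed gradient from the proposal's batch and original parameters; `endorse_proposal` and the dictionary layout are assumptions for illustration, and `grad_fn` stands in for the gradient of the proposal's loss function.

```python
def endorse_proposal(proposal, grad_fn, tol=1e-9):
    # Re-run the gradient calculation from the proposal's batch and
    # original model parameters, and endorse only if it matches the
    # gradient the training participant claimed.
    recomputed = grad_fn(proposal["params"], proposal["batch"])
    claimed = proposal["gradient"]
    return all(abs(a - b) <= tol for a, b in zip(recomputed, claimed))
```

For a squared-error loss on a one-parameter linear model, the endorser's recomputation is just the analytic gradient over the batch.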

Straggler mitigation for iterative machine learning via task preemption

Embodiments of the present invention provide computer-implemented methods, computer program products, and systems. Embodiments of the present invention can run preemptable tasks distributed across a distributed environment, wherein each task of a plurality of preemptable tasks has been assigned two or more training data samples to process during each iteration. Upon verifying that a preemption condition for an iteration is satisfied, embodiments of the present invention can preempt any preemptable task that has started processing the training data samples assigned to it, and update the cognitive model based on the outputs obtained, including outputs from both the preempted tasks and completed tasks that have finished processing all training data samples assigned to them.
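
The preemption idea can be sketched with a simulated sequential scheduler; the sample-count preemption condition and the doubling stand-in computation are assumptions for illustration, since a real deployment would run the tasks concurrently across workers.

```python
def run_iteration(tasks, preempt_after):
    # Each task processes its assigned samples one at a time; once the
    # preemption condition (enough samples processed overall) is satisfied,
    # remaining work is preempted. The model update then aggregates outputs
    # from both fully completed and preempted (partial) tasks.
    done = 0
    outputs = []
    for task in tasks:                   # simulated sequential scheduler
        partial = []
        for sample in task:
            if done >= preempt_after:    # preemption condition satisfied
                break                    # preempt this straggling task
            partial.append(sample * 2)   # stand-in per-sample computation
            done += 1
        outputs.extend(partial)          # keep partial results too
    return outputs
```

Stragglers thus contribute whatever they finished instead of stalling the iteration.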

SELECTION METHOD OF LEARNING DATA AND COMPUTER SYSTEM
20230019364 · 2023-01-19

A computer system accurately selects learning data for improving the prediction accuracy of a predictor. The system is connected to a database that stores a plurality of pieces of learning data and information for managing a plurality of predictors generated under different learning conditions. A target predictor is selected; for each of a plurality of pieces of test data, an influence degree representing the strength of the influence of the learning data on the prediction accuracy of the target predictor is calculated; an influence score of the learning data is calculated over the plurality of predictors based on the plurality of influence degrees of the learning data associated with the predictors; and the learning data to be used is selected from the plurality of pieces of learning data on the basis of the influence scores of each of the plurality of pieces of learning data.
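
The score-then-select step can be sketched as below; averaging the influence degrees across predictors and taking the top-k are assumed aggregation and selection rules, since the abstract leaves both unspecified.

```python
def select_learning_data(influence, top_k):
    # influence[d] holds the influence degrees of learning datum d, one per
    # predictor (each degree already summarizing that datum's effect on the
    # predictor's test accuracy). Aggregate into a per-datum influence
    # score, then select the top-k data.
    scores = {d: sum(degrees) / len(degrees)
              for d, degrees in influence.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```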

Collaborative information extraction

Embodiments relate to a system, program product, and method for information extraction and annotation of a data set. Neural models are utilized to automatically attach machine annotations to data elements within an unlabeled data set. The attached machine annotations are evaluated and a score is attached to the annotations, the score reflecting a confidence of correctness of the annotations. A labeled data set is iteratively expanded with selectively evaluated annotations based on the attached score. The labeled data set is applied to an unexplored corpus to identify matching corpus data and to populate instances of the labeled data set.
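
The iterative expansion loop can be sketched as a self-training routine; `expand_labels`, the confidence threshold, and the fixed round count are assumptions for illustration, with `annotate` standing in for the neural models that return a (label, confidence) pair.

```python
def expand_labels(labeled, unlabeled, annotate, threshold=0.9, rounds=3):
    # Iteratively attach machine annotations to unlabeled elements and
    # promote only high-confidence annotations into the labeled set; the
    # rest stay unlabeled for a later round.
    labeled = dict(labeled)
    remaining = list(unlabeled)
    for _ in range(rounds):
        still_unlabeled = []
        for item in remaining:
            label, confidence = annotate(item, labeled)
            if confidence >= threshold:
                labeled[item] = label       # selectively expand
            else:
                still_unlabeled.append(item)
        remaining = still_unlabeled
    return labeled, remaining
```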