G06N3/0475

Medical image segmentation method based on U-Net

A medical image segmentation method based on a U-Net, including: sending a real segmentation image and an original image to a generative adversarial network for data enhancement to generate a composite image with a label; putting the composite image into the original data set to obtain an expanded data set; and sending the expanded data set to an improved multi-feature fusion segmentation network for training. A Dilated Convolution Module is added between the shallow and deep feature skip connections of the segmentation network to obtain receptive fields of different sizes, which enhances the fusion of detail information with deep semantics, improves adaptability to the size of the segmentation target, and improves medical image segmentation accuracy. The over-fitting problem that occurs when training the segmentation network is alleviated by using the data set expanded with the generative adversarial network.
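As an illustrative sketch (not the patented implementation), the dilated-convolution idea can be shown in a few lines of numpy: the same small kernel is run at several dilation rates, giving effective receptive fields of different sizes, and the resulting feature maps are fused. The kernel sharing and the sum-fusion here are simplifying assumptions for the sketch.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Valid-mode 2-D convolution with dilation `rate`: zeros are in
    effect inserted between kernel taps, so the effective receptive
    field per axis is (k - 1) * rate + 1."""
    k = kernel.shape[0]
    eff = (k - 1) * rate + 1                       # effective kernel size
    h, w = x.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eff:rate, j:j + eff:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def multi_rate_fusion(x, kernel, rates=(1, 2, 4)):
    """Run the kernel at several dilation rates, crop the maps to a
    common size, and sum them, fusing detail (small rate) with wider
    context (large rate)."""
    maps = [dilated_conv2d(x, kernel, r) for r in rates]
    size = min(m.shape[0] for m in maps)
    return sum(m[:size, :size] for m in maps)

feat = np.random.default_rng(0).standard_normal((16, 16))
kern = np.ones((3, 3)) / 9.0
fused = multi_rate_fusion(feat, kern)
```

In a real segmentation network each rate would have its own learned kernel, and the fusion would typically be channel concatenation rather than a sum.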

METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM FOR REMOTE DAMAGE ASSESSMENT OF VEHICLE
20230038645 · 2023-02-09 ·

A method for remote damage assessment of a vehicle is provided. The present disclosure relates to the technical field of artificial intelligence, in particular to the technical field of image and text recognition. An implementation solution is: performing data collection on a target vehicle to determine damage information of the target vehicle; obtaining call content of an insurance claiming call for the target vehicle, and extracting accident-related information from the call content, wherein the accident-related information includes named entities in the call content and a relationship between the named entities; and determining a first fraud probability corresponding to the target vehicle at least based on the damage information and the accident-related information.
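A minimal sketch of the scoring step might look as follows. The keyword lists, role names, weights, and the simple substring "entity extraction" are all hypothetical stand-ins; the patent's named-entity and relation extraction would use trained models.

```python
# Hypothetical risk weights -- not from the patent text.
DAMAGE_RISK = {"total loss": 0.4, "fire": 0.3, "scratch": 0.05}
SUSPICIOUS_RELATIONS = {("driver", "repair-shop owner"): 0.3}

def extract_entities(call_text):
    """Toy named-entity step: pull role words from the call content."""
    roles = ["driver", "passenger", "repair-shop owner"]
    return [r for r in roles if r in call_text.lower()]

def fraud_probability(damage_terms, call_text):
    """Combine damage information and accident-related call information
    into a single first fraud probability, capped at 1.0."""
    score = sum(DAMAGE_RISK.get(t, 0.0) for t in damage_terms)
    ents = extract_entities(call_text)
    for (a, b), w in SUSPICIOUS_RELATIONS.items():
        if a in ents and b in ents:  # relationship between named entities
            score += w
    return min(score, 1.0)

p = fraud_probability(["total loss", "fire"],
                      "The driver said the repair-shop owner towed the car")
```

The point of the sketch is only the fusion of the two evidence sources (damage data plus call-derived entities and relations) into one probability.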

OPTIMIZATION OF MEMORY USE FOR EFFICIENT NEURAL NETWORK EXECUTION

Implementations disclosed describe methods, and systems to perform the methods, of optimizing a size of memory used for accumulation of neural node outputs and for supporting multiple computational paths in neural networks. In one example, a size of memory used to perform neural layer computations is reduced by performing nodal computations in multiple batches, followed by rescaling and accumulation of nodal outputs. In another example, execution of parallel branches of neural node computations includes evaluating, prior to the actual execution, the amount of memory resources needed to execute a particular order of branches sequentially, and selecting the order that minimizes this amount or keeps it below a target threshold.
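The branch-ordering idea can be sketched with a toy cost model (the memory model here, scratch space per running branch plus resident outputs of finished branches, is an assumption for illustration):

```python
from itertools import permutations

def peak_memory(order, branch_mem, branch_out):
    """Simulate executing branches sequentially in `order`. While branch
    b runs it needs branch_mem[b] of scratch space, and the outputs of
    all already-finished branches must stay resident for the merge."""
    peak, resident = 0, 0
    for b in order:
        peak = max(peak, resident + branch_mem[b])
        resident += branch_out[b]      # keep b's output until the merge
    return peak

def best_order(branch_mem, branch_out):
    """Evaluate every sequential order prior to execution and pick the
    one that minimizes the peak memory requirement."""
    branches = range(len(branch_mem))
    return min(permutations(branches),
               key=lambda o: peak_memory(o, branch_mem, branch_out))

mem = [10, 6, 2]   # scratch needed while each branch runs
out = [1, 1, 1]    # size of each branch's output
order = best_order(mem, out)
```

Running the largest-scratch branch first wins here, because its scratch space is then not stacked on top of other branches' resident outputs; a real scheduler would prune the factorial search.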

FAT SUPPRESSION USING NEURAL NETWORKS
20230041796 · 2023-02-09 · ·

In a method for determining a fat-reduced MR image, a first MR image is provided that contains, apart from the other tissue constituents, MR signals from only one of the two fat constituents. The first MR image is applied to a trained ANN, which was trained using first MR training data as the input data, the first MR training data including, apart from the other tissue constituents, MR signals from only the one of the two fat constituents, and using second MR training data as base knowledge, the second MR training data including, apart from the other tissue constituents, no MR signals from the two fat constituents. An MR output image is determined from the trained ANN, to which the first MR image was applied, as the fat-reduced MR image, wherein the fat-reduced MR image includes, apart from the other tissue constituents, no MR signals from the two fat constituents.
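The training setup (fat-containing inputs, fat-free targets as base knowledge) can be sketched with synthetic 1-D "images" and a least-squares linear map standing in for the ANN; the fixed additive fat pattern and the linear model are simplifying assumptions, not the patent's network.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_pix = 200, 8

# Synthetic stand-ins: fat-free images (second training data, the "base
# knowledge") and the same images with one fat constituent added (first
# training data).
water = rng.uniform(0.5, 1.0, (n_train, n_pix))
fat_pattern = rng.uniform(0.0, 0.3, n_pix)        # fixed fat contribution
with_fat = water + fat_pattern

# "Train" an affine map W (toy stand-in for the trained ANN) that sends
# the fat-containing input to the fat-free target, by least squares.
X = np.hstack([with_fat, np.ones((n_train, 1))])  # bias column
W, *_ = np.linalg.lstsq(X, water, rcond=None)

def fat_reduced(image):
    """Apply the trained map to a first MR image -> fat-reduced output."""
    return np.append(image, 1.0) @ W

test_img = rng.uniform(0.5, 1.0, n_pix) + fat_pattern
out = fat_reduced(test_img)
```

Because the synthetic fat contribution is affine, the least-squares map recovers the fat-free image essentially exactly; a real ANN handles the nonlinear, spatially varying case.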

Generating Non-Classical Measurements on Devices with Parameterized Time Evolution
20230042699 · 2023-02-09 ·

A quantum contextual measurement is generated from a quantum device capable of performing continuous time evolution, by generating a first measurement result and a second measurement result and combining the first measurement result and the second measurement result to generate the quantum contextual measurement. The first measurement result may be generated by initializing the quantum device to a first initial quantum state, applying a first continuous time evolution to the first initial state to generate a first evolved state, and measuring the first evolved state to generate the first measurement result. A similar process may be applied to generate a second evolved state that is at least approximately equal to the first evolved state, after which another continuous time evolution is applied to the second evolved state to generate a third evolved state, and the third evolved state is measured to generate the second measurement result.
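A single-qubit simulation sketches the protocol's shape. The Y-rotation standing in for the parameterized time evolution, the chosen angles, and the difference used as the combination rule are all illustrative assumptions.

```python
import numpy as np

def evolve(state, theta):
    """Continuous time evolution on one qubit: rotation about the Y axis
    by angle theta (a stand-in for exp(-iHt) with parameterized t)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    U = np.array([[c, -s], [s, c]])
    return U @ state

def measure(state):
    """Return the probability of outcome |0> for `state`."""
    return abs(state[0]) ** 2

# First measurement result: initialize, evolve by theta, measure.
psi0 = np.array([1.0, 0.0])            # first initial quantum state
theta = 0.8
first_evolved = evolve(psi0, theta)
m1 = measure(first_evolved)

# Second result: re-prepare a state (at least approximately) equal to
# the first evolved state, apply a further evolution, then measure.
second_evolved = evolve(psi0, theta)   # re-preparation of the same state
third_evolved = evolve(second_evolved, 0.5)
m2 = measure(third_evolved)

# Combine the two results into the contextual statistic (placeholder
# combination; the actual combination rule is defined by the protocol).
contextual = m1 - m2
```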

ANOMALY DETECTION USING USER BEHAVIORAL BIOMETRICS PROFILING METHOD AND APPARATUS

Techniques for determining anomalous user behavior in connection with an online application are disclosed. In one embodiment, a method is disclosed comprising obtaining user behavior data in connection with a user of an application, generating feature data using the obtained user behavior data, and obtaining one or more user behavior anomaly predictions from one or more anomaly prediction models trained to output a user behavior anomaly prediction in response to the feature data. Each user behavior anomaly prediction indicates a probability that the user behavior is anomalous. A user behavior anomaly determination is made using the user behavior anomaly prediction(s).
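The pipeline (behavior data, feature data, per-model predictions, final determination) can be sketched as below. The two rule-based "models", the features, and the averaging are hypothetical stand-ins for trained anomaly prediction models.

```python
def extract_features(events):
    """Toy behavioral-biometrics features from raw event timestamps:
    mean gap between actions and event count (hypothetical choices)."""
    gaps = [b - a for a, b in zip(events, events[1:])]
    return {"mean_gap": sum(gaps) / len(gaps), "n_events": len(events)}

def model_fast_typing(f):
    """Anomaly prediction model 1: very small gaps look bot-like."""
    return 0.9 if f["mean_gap"] < 0.05 else 0.1

def model_burstiness(f):
    """Anomaly prediction model 2: very many events look suspicious."""
    return 0.8 if f["n_events"] > 100 else 0.2

def anomaly_determination(events, threshold=0.5):
    """Average the per-model anomaly probabilities, then threshold to
    make the final user behavior anomaly determination."""
    f = extract_features(events)
    preds = [model_fast_typing(f), model_burstiness(f)]
    prob = sum(preds) / len(preds)
    return prob, prob >= threshold

prob, is_anomalous = anomaly_determination([0.00, 0.01, 0.02, 0.03])
```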

SYSTEMS AND METHODS FOR ARCHITECTURE EMBEDDINGS FOR EFFICIENT DYNAMIC SYNTHETIC DATA GENERATION

Systems and methods for architecture embeddings for efficient dynamic synthetic data generation are disclosed. The disclosed systems and methods may include a system for generating synthetic data configured to perform operations. The operations may include retrieving a set of rules associated with a first data profile and generating, by executing a hyperparameter search, a plurality of hyperparameter sets for generative adversarial networks (GANs) that satisfy the set of rules. The operations may include generating mappings between the hyperparameter sets and the first data profile and storing the mappings in a hyperparameter library. The operations may include receiving a request for synthetic data, the request indicating a second data profile and selecting, from the mappings in the hyperparameter library, a hyperparameter set mapped to the second data profile. The operations may include building a GAN using the selected hyperparameter set and generating, using the GAN, a synthetic data set.
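The library workflow (rules for a profile, a hyperparameter search filtered by those rules, stored mappings, lookup on request) can be sketched as follows; the grid of candidate values, the rule format, and the pick-first selection are illustrative assumptions.

```python
def satisfies_rules(hp, rules):
    """Check one hyperparameter set against a profile's rule set, where
    each rule is an allowed (low, high) range for one hyperparameter."""
    return all(lo <= hp[k] <= hi for k, (lo, hi) in rules.items())

def hyperparameter_search(rules):
    """Toy grid search: enumerate candidate GAN hyperparameter sets and
    keep those that satisfy the rule set."""
    kept = []
    for lr in (1e-5, 1e-4, 1e-3, 1e-2):
        for latent_dim in (16, 64, 256):
            hp = {"lr": lr, "latent_dim": latent_dim}
            if satisfies_rules(hp, rules):
                kept.append(hp)
    return kept

# Build the hyperparameter library: data profile -> candidate sets.
library = {}
rules_a = {"lr": (1e-4, 1e-3), "latent_dim": (32, 128)}
library["profile_a"] = hyperparameter_search(rules_a)

def select_for_request(profile):
    """Serve a synthetic-data request: look the requested profile up in
    the library and pick a stored hyperparameter set to build the GAN."""
    return library[profile][0]

chosen = select_for_request("profile_a")
```

Building the GAN from `chosen` and sampling synthetic records would follow; the point of the library is to amortize the search across repeated requests for similar profiles.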

CONTROLLABLE NEURAL NETWORKS OR OTHER CONTROLLABLE MACHINE LEARNING MODELS
20230040176 · 2023-02-09 ·

A method includes obtaining (such as accessing, receiving, acquiring, etc.), using at least one processor of an electronic device, a machine learning model trained to process input data and generate output data over at least one range of values associated with one or more control variables. The method also includes providing, using the at least one processor, specified input data to the machine learning model and providing, using the at least one processor, one or more specified values of the one or more control variables to the machine learning model. The one or more specified values of the one or more control variables are within the at least one range of values. The method further includes performing inferencing using the machine learning model to process the specified input data and generate specified output data. The inferencing is controlled based on the one or more specified values of the control variable(s).
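A minimal sketch of range-validated control variables steering inference (the wrapper pattern and the "intensity" variable are hypothetical; the patented model would be a neural network whose output varies with the control value):

```python
def make_controllable_model(control_ranges):
    """Wrap a toy model so inferencing is controlled by one or more
    control variables, each validated against its allowed range of
    values before the model is run."""
    def infer(x, **controls):
        for name, value in controls.items():
            lo, hi = control_ranges[name]
            if not lo <= value <= hi:
                raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
        # Toy "model": the control variable scales the output strength.
        return x * controls.get("intensity", 1.0)
    return infer

model = make_controllable_model({"intensity": (0.0, 2.0)})
y = model(10.0, intensity=1.5)
```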

POWDER DEGRADATION PREDICTIONS

Examples of methods are described. In some examples, a method includes determining, using a variational autoencoder model, a latent space representation of object model data. In some examples, the method includes predicting manufacturing powder degradation based on the latent space representation.
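The two stages (encode object model data to a latent space, predict degradation from the latent representation) can be sketched with random linear maps standing in for the trained encoder and regression head; every weight and shape here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(object_model, W_mu, W_logvar):
    """Toy variational-encoder stand-in: linear maps from flattened
    object model data to the latent mean and log-variance."""
    x = object_model.ravel()
    return W_mu @ x, W_logvar @ x

def predict_degradation(mu, coef, bias):
    """Toy regression head: a degradation fraction from the latent
    mean, squashed to (0, 1) with a sigmoid."""
    z = coef @ mu + bias
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_latent = 64, 4
W_mu = rng.standard_normal((n_latent, n_in)) * 0.1
W_logvar = rng.standard_normal((n_latent, n_in)) * 0.1
coef = rng.standard_normal(n_latent)

object_model = rng.standard_normal((8, 8))   # stand-in voxel grid
mu, logvar = encode(object_model, W_mu, W_logvar)
degradation = predict_degradation(mu, coef, bias=0.0)
```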

TRAINING AND GENERALIZATION OF A NEURAL NETWORK
20230041290 · 2023-02-09 ·

A computer system (which may include one or more computers) that trains a neural network is described. During operation, the computer system may train the neural network based at least in part on a set of hyperparameters, where the training includes computing weights associated with neurons in the neural network. Moreover, during the training, the computer system may dynamically adapt one or more first hyperparameters in the set of hyperparameters based at least in part on a measure corresponding to a local geometry of a loss landscape at or proximate to a current location in the loss landscape. Note that the dynamic adapting based at least in part on the measure is separate from or in addition to a predefined adaptation of one or more second hyperparameters in the set of hyperparameters based on a predefined number of iterations or cycles in the training or a predefined scaling factor.
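As an illustrative sketch, a learning rate (a "first hyperparameter") can be adapted each step from a finite-difference curvature estimate, a simple measure of the loss landscape's local geometry; the quadratic toy loss and the 1/curvature rule are assumptions, not the patented scheme.

```python
def grad(w, a=4.0):
    """Gradient of the toy loss 0.5 * a * w**2, whose curvature is a."""
    return a * w

def train(w0, steps=20, eps=1e-4):
    """Gradient descent whose learning rate is dynamically adapted from
    the curvature estimated by finite differences of the gradient near
    the current location in the loss landscape (sharper -> smaller
    steps, flatter -> larger steps)."""
    w = w0
    for _ in range(steps):
        curvature = (grad(w + eps) - grad(w - eps)) / (2 * eps)
        lr = 1.0 / max(curvature, 1e-8)
        w -= lr * grad(w)
    return w

w_final = train(2.0)
```

On this quadratic the curvature estimate is exact, so the adapted step reaches the minimum almost immediately; this contrasts with a predefined schedule (e.g. decay every fixed number of iterations), which ignores the local geometry.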