Patent classifications
G06F18/21347
SELF ENSEMBLING TECHNIQUES FOR GENERATING MAGNETIC RESONANCE IMAGES FROM SPATIAL FREQUENCY DATA
Techniques for generating magnetic resonance (MR) images of a subject from MR data obtained by a magnetic resonance imaging (MRI) system, the techniques including: obtaining input MR data obtained by imaging the subject using the MRI system; generating a plurality of transformed input MR data instances by applying a respective first plurality of transformations to the input MR data; generating a plurality of MR images from the plurality of transformed input MR data instances and the input MR data using a non-linear MR image reconstruction technique; generating an ensembled MR image from the plurality of MR images at least in part by: applying a second plurality of transformations to the plurality of MR images to obtain a plurality of transformed MR images; and combining the plurality of transformed MR images to obtain the ensembled MR image; and outputting the ensembled MR image.
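The ensembling loop above can be sketched in plain NumPy. This is an illustrative sketch, not the patented method: a bare inverse FFT stands in for the non-linear reconstruction technique, and a circular flip serves as one example transformation pair; all function names are assumptions.

```python
import numpy as np

def reconstruct(kspace):
    # Stand-in for the non-linear MR image reconstruction technique: a plain inverse FFT.
    return np.abs(np.fft.ifft2(kspace))

def circflip(a):
    # Circular reflection r -> -r (mod N); it is its own inverse in both domains.
    return np.roll(np.flip(a, axis=(0, 1)), 1, axis=(0, 1))

def self_ensemble(kspace, transforms, inverses):
    # First plurality of transformations: transform the input MR data, then
    # reconstruct each transformed instance (and the untransformed input).
    images = [reconstruct(t(kspace)) for t in transforms]
    images.append(reconstruct(kspace))
    # Second plurality of transformations: map each image back into a common
    # frame, then combine by averaging to obtain the ensembled MR image.
    aligned = [inv(img) for inv, img in zip(inverses + [lambda x: x], images)]
    return np.mean(aligned, axis=0)
```

With `transforms = inverses = [circflip]`, noise-free data yields the same image as a single reconstruction; with noisy data the average suppresses transform-dependent artefacts.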
Systems and methods for coupled representation using transform learning for solving inverse problems
This disclosure relates to systems and methods for solving generic inverse problems by providing a coupled representation architecture using transform learning. Conventional solutions are complex, require long training and testing times, and their reconstruction quality may not be suitable for all applications. Furthermore, these inherent lacunae preclude application to real-time scenarios. The methods provided herein involve very low computational complexity, requiring only three matrix-vector products, and very short training and testing times, which makes them applicable to real-time applications. Unlike conventional learning architectures using inductive approaches, the CASC of the present disclosure can learn directly from the source domain, and the number of features in a source domain need not equal the number of features in a target domain.
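The "three matrix-vector products" claim can be illustrated with a minimal, hypothetical inference sketch. Here `T_s`, `C`, and `B_t` stand for a learned source analysis transform, a coupling map, and a target synthesis operator; these names and this exact factorisation are assumptions, not the disclosure's notation.

```python
import numpy as np

def coupled_predict(x, T_s, C, B_t):
    # Inference with exactly three matrix-vector products.
    z_s = T_s @ x     # 1) analysis transform of the source-domain sample
    z_t = C @ z_s     # 2) couple source coefficients to target coefficients
    return B_t @ z_t  # 3) synthesise the target-domain estimate
```

Because `C` may be rectangular, the number of source-domain features need not equal the number of target-domain features, matching the abstract's remark.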
Rare pose data generation
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating rare pose data. One of the methods includes obtaining a three-dimensional model of a dynamic object, wherein the dynamic object has multiple movable elements that define a plurality of poses of the dynamic object. A plurality of template poses of the dynamic object are used to generate additional poses for the dynamic object including varying angles of one or more key joints of the dynamic object according to the three-dimensional model. Point cloud data is generated for the additional poses generated for the dynamic object.
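A minimal sketch of the pose-variation idea, assuming a planar kinematic chain with relative joint angles as the "three-dimensional" model stand-in; the chain model and function names are illustrative, not the disclosure's object representation.

```python
import numpy as np

def vary_pose(template_angles, joint_index, delta):
    # Create an additional pose by varying the angle of one key joint
    # of a template pose.
    pose = np.array(template_angles, dtype=float)
    pose[joint_index] += delta
    return pose

def chain_points(angles, link_length=1.0):
    # Generate a tiny "point cloud": the joint positions of a planar chain
    # under the given relative joint angles.
    pts, pos, heading = [np.zeros(2)], np.zeros(2), 0.0
    for a in angles:
        heading += a
        pos = pos + link_length * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pos.copy())
    return np.stack(pts)
```

Sweeping `delta` over a range of values for each key joint produces the family of additional (potentially rare) poses, each of which is then rendered to point cloud data.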
Deep learning techniques for generating magnetic resonance images from spatial frequency data
Techniques for generating magnetic resonance (MR) images of a subject from MR data obtained by a magnetic resonance imaging (MRI) system, the techniques include: obtaining input MR spatial frequency data obtained by imaging the subject using the MRI system; generating an MR image of the subject from the input MR spatial frequency data using a neural network model comprising: a pre-reconstruction neural network configured to process the input MR spatial frequency data; a reconstruction neural network configured to generate at least one initial image of the subject from output of the pre-reconstruction neural network; and a post-reconstruction neural network configured to generate the MR image of the subject from the at least one initial image of the subject.
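The three-stage model reads as a straightforward composition of networks. A hedged sketch with trivial stand-ins for each stage (the identity, an inverse FFT, and a clipping step are illustrative placeholders, not the patented networks):

```python
import numpy as np

def reconstruct_pipeline(kspace, pre, recon, post):
    # Pre-reconstruction -> reconstruction -> post-reconstruction,
    # chained exactly as in the abstract.
    return post(recon(pre(kspace)))

# Illustrative stand-ins for the three neural networks:
pre_net = lambda k: k                          # processes the spatial frequency data
recon_net = lambda k: np.abs(np.fft.ifft2(k))  # generates an initial image
post_net = lambda img: np.clip(img, 0.0, None) # refines the initial image
```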
Deep learning techniques for alignment of magnetic resonance images
Generating magnetic resonance (MR) images of a subject from MR data obtained by a magnetic resonance imaging (MRI) system by: generating first and second sets of one or more MR images from first and second input MR data; aligning the first and second sets of MR images using a neural network model comprising first and second neural networks, the aligning comprising: estimating, using the first neural network, a first transformation between the first and second sets of MR images; generating a first updated set of MR images from the second set of MR images using the first transformation; estimating, using the second neural network, a second transformation between the first set and the first updated set of MR images; and aligning the first set of MR images and the second set of MR images at least in part by using the first transformation and the second transformation.
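The coarse-then-residual alignment loop can be sketched with classical stand-ins: FFT-based correlation replaces each neural transformation estimator, and integer translations replace general transformations. All names are illustrative.

```python
import numpy as np

def estimate_shift(ref, mov):
    # Stand-in for a neural transformation estimator: the correlation peak
    # between the two images, computed via the FFT.
    cross = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov)))
    return np.array(np.unravel_index(np.argmax(np.abs(cross)), cross.shape))

def two_stage_align(ref, mov):
    s1 = estimate_shift(ref, mov)                     # first network: first transformation
    warped = np.roll(mov, tuple(s1), axis=(0, 1))     # first updated set of images
    s2 = estimate_shift(ref, warped)                  # second network: residual transformation
    return np.roll(mov, tuple(s1 + s2), axis=(0, 1))  # align using both transformations
```

The second estimate only has to correct the residual left by the first, which is the point of the two-network design.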
FOURIER TRANSFORM-BASED IMAGE SYNTHESIS USING NEURAL NETWORKS
Apparatuses, systems, and techniques to scale textured images using a Fourier transform in conjunction with one or more neural networks. In at least one embodiment, a neural network generates an expanded image from an input image by applying a Fourier transform to one or more feature maps generated by said neural network and up-scaling one or more resulting frequency domain feature maps before generating an expanded output image based on up-scaled feature maps.
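The frequency-domain up-scaling step has a classical analogue: zero-padding a centred spectrum, which amounts to sinc interpolation. The sketch below shows that analogue on a raw image rather than on neural feature maps, so it is an illustration of the principle, not the patented pipeline.

```python
import numpy as np

def fourier_upscale(img, factor=2):
    # Up-scale by zero-padding the centred frequency-domain representation.
    h, w = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))
    ph, pw = (factor - 1) * h // 2, (factor - 1) * w // 2
    big = np.pad(spec, ((ph, ph), (pw, pw)))
    # Rescale so mean intensity is preserved after the larger inverse FFT.
    return np.real(np.fft.ifft2(np.fft.ifftshift(big))) * factor ** 2
```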
TRANSFER LEARNING WITH MACHINE LEARNING SYSTEMS
Transfer learning in machine learning can include receiving a machine learning model. Target domain training data for reprogramming the machine learning model using transfer learning can be received. The target domain training data can be transformed by performing a transformation function on the target domain training data. Output labels of the machine learning model can be mapped to target labels associated with the target domain training data. The transformation function can be trained by optimizing a parameter of the transformation function. The machine learning model can be reprogrammed based on input data transformed by the transformation function and a mapping of the output labels to target labels.
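The reprogramming recipe can be sketched with toy components: a frozen linear map stands in for the received model, an additive perturbation stands in for the trainable transformation function, and a many-to-one index map stands in for the output-to-target label mapping. All of these are assumptions for illustration.

```python
import numpy as np

W_frozen = np.eye(4)  # stands in for the received, frozen machine learning model

def transform_input(x, theta):
    # Trainable transformation function; only theta is optimised during training.
    return x + theta

def predict_target(x, theta, label_map):
    logits = W_frozen @ transform_input(x, theta)  # frozen model, transformed input
    # Map the model's output labels onto the target labels.
    return np.array([logits[idxs].sum() for idxs in label_map])
```

Training then adjusts `theta` (e.g., by gradient descent on a target-domain loss) while `W_frozen` and `label_map` stay fixed.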
TRAINING IMAGE-TO-IMAGE TRANSLATION NEURAL NETWORKS
A method includes obtaining a source training dataset that includes a plurality of source training images and obtaining a target training dataset that includes a plurality of target training images. For each source training image, the method includes translating, using a forward generator neural network G, the source training image to a respective translated target image according to current values of forward generator parameters. For each target training image, the method includes translating, using a backward generator neural network F, the target training image to a respective translated source image according to current values of backward generator parameters. The method also includes training the forward generator neural network G jointly with the backward generator neural network F by adjusting the current values of the forward generator parameters and the backward generator parameters to optimize an objective function.
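Joint training of G and F typically includes a cycle-consistency term in the objective: F(G(x)) should recover x and G(F(y)) should recover y. A minimal sketch of that term follows; the L1 form is an assumption, since the abstract does not specify the objective.

```python
import numpy as np

def cycle_consistency_loss(G, F, xs, ys):
    # Forward cycle: source -> translated target -> reconstructed source.
    forward = np.mean([np.abs(F(G(x)) - x).mean() for x in xs])
    # Backward cycle: target -> translated source -> reconstructed target.
    backward = np.mean([np.abs(G(F(y)) - y).mean() for y in ys])
    return forward + backward
```

Adjusting the parameters of both generators to drive this term (together with any adversarial terms) toward zero is what "training G jointly with F" amounts to.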
Generating object embeddings from images
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an object embedding system. In one aspect, a method comprises providing selected images as input to the object embedding system and generating corresponding embeddings, wherein the object embedding system comprises a thumbnailing neural network and an embedding neural network. The method further comprises backpropagating gradients based on a loss function to reduce the distance between embeddings for same instances of objects, and to increase the distance between embeddings for different instances of objects.
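The loss described (pull same-instance embeddings together, push different-instance embeddings apart) matches the classical contrastive form. The sketch below shows that form on a single pair; the margin value and function names are assumptions, not taken from the abstract.

```python
import numpy as np

def contrastive_loss(e1, e2, same_instance, margin=1.0):
    # Reduce distance for embeddings of the same object instance;
    # increase it (up to a margin) for different instances.
    d = np.linalg.norm(e1 - e2)
    return d ** 2 if same_instance else max(margin - d, 0.0) ** 2
```

Backpropagating the gradient of this quantity through the embedding (and thumbnailing) networks yields the training signal the abstract refers to.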
Weeding systems and methods, railway weeding vehicles
A weeding system for a railway weeding vehicle comprising a camera and a spraying unit with several supply modules, a nozzle and a controller module to receive a weed species detection signal and to command the spraying of chemical agent. The weeding system also comprises a weed species identification unit with a communication module, a memory module and a processing module having several parallel processing cores. Each parallel processing core performs a convolution operation between a sub-matrix constructed from nearby pixels of the image and a predefined kernel stored in the memory module to obtain a feature representation sub-matrix of the pixel values of the image. The processing module computes a probability of presence of a weed species from the feature representation matrix and generates a weed species detection signal.
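The per-core operation described (a sub-matrix of nearby pixels against a predefined kernel) can be sketched as follows; as in most CNN implementations this is strictly a cross-correlation, and the names are illustrative.

```python
import numpy as np

def core_convolve(image, i, j, kernel):
    # One parallel core's task: combine the sub-matrix of pixels near (i, j)
    # with a predefined kernel to yield one entry of the feature
    # representation matrix.
    k = kernel.shape[0] // 2
    patch = image[i - k:i + k + 1, j - k:j + k + 1]
    return float((patch * kernel).sum())
```

Running this for every interior pixel, across the processing module's parallel cores, fills the feature representation matrix from which the weed-species probability is computed.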