Patent classifications
G06T5/75
TRAINING IMAGE-PROCESSING NEURAL NETWORKS BY SYNTHETIC PHOTOREALISTIC INDICIA-BEARING IMAGES
Systems and methods for training image processing neural networks by synthetic photorealistic indicia-bearing images. An example method comprises: generating an initial set of images, wherein each image of the initial set of images comprises a rendering of a text string; producing an augmented set of images by processing the initial set of images to introduce, into each image of the initial set of images, at least one simulated image defect; generating a training dataset comprising a plurality of pairs of images, wherein each pair of images comprises a first image selected from the initial set of images and a second image selected from the augmented set of images; and training, using the training dataset, a convolutional neural network for image processing.
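The pipeline in this abstract — render clean text images, inject simulated defects, and pair clean/degraded versions as training data — can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the renderer and the noise defect here are hypothetical stand-ins for photorealistic rendering and the defect models the abstract leaves unspecified.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_text(text, height=16, width=64):
    # Stand-in for a photorealistic glyph renderer (hypothetical): a white
    # canvas with one crude dark stroke per character.
    img = np.ones((height, width), dtype=np.float32)
    for i in range(min(len(text), width // 4 - 1)):
        img[4:12, 2 + 4 * i] = 0.0
    return img

def add_defects(img):
    # One simulated image defect: additive sensor noise. Blur, skew, or
    # compression artifacts would slot into the same step.
    noisy = img + rng.normal(0.0, 0.1, img.shape).astype(np.float32)
    return np.clip(noisy, 0.0, 1.0)

initial_set = [render_text(s) for s in ["PATENT", "IMAGE", "OCR"]]
augmented_set = [add_defects(img) for img in initial_set]
# Training pairs for the CNN: (clean target, degraded input)
training_pairs = list(zip(initial_set, augmented_set))
```

The key property is that every degraded image keeps a pixel-aligned clean counterpart, so the network can be trained with a simple per-pixel reconstruction loss.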
Image processing apparatus
An image processing apparatus includes: a plurality of micro-lenses arranged in a two-dimensional pattern so that subject light passing through an image-forming optical system enters them; a plurality of light receiving elements, disposed in the vicinity of a focal position at a rear side of the micro-lenses so as to correspond to the plurality of micro-lenses respectively, that receive the subject light through the micro-lenses; an image synthesizing unit that synthesizes an image on a focal plane different from a predetermined focal plane of the image forming optical system based upon outputs from the plurality of light receiving elements; and a processing unit that, based upon at least an objective image in the vicinity of the plurality of micro-lenses and an auxiliary image outside that vicinity, both synthesized by the image synthesizing unit, performs a process to enhance a resolution of the objective image.
Computational blur for varifocal displays
Methods are disclosed herein to blur an image to be displayed on a stereo display (such as virtual or augmented reality displays) based on the focus and convergence of the user. The methods approximate the complex effect of chromatic aberration on focus, utilizing three (R/G/B) simple Gaussian blurs. For transparency the methods utilize buffers for levels of blur rather than depth. The methods enable real-time chromatic-based blurring effects for VR/AR displays.
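The core trick described here — approximating chromatic aberration with three cheap per-channel Gaussian blurs — can be sketched directly. This is a minimal NumPy version under assumed parameters: the per-channel sigmas are illustrative, not values from the patent, and a real varifocal renderer would derive them from the user's focus and convergence.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def blur_channel(channel, sigma):
    # Separable Gaussian blur: convolve rows, then columns.
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, channel, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def chromatic_blur(rgb, sigmas=(1.5, 1.0, 0.6)):
    # Hypothetical per-channel defocus widths: red blurred most, blue least,
    # approximating longitudinal chromatic aberration with three simple blurs.
    return np.stack(
        [blur_channel(rgb[..., c], s) for c, s in enumerate(sigmas)], axis=-1
    )
```

Because each channel uses a single separable Gaussian, the cost is three 1D convolution passes per axis, which is what makes the effect feasible in real time on VR/AR hardware.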
Radiation image processing apparatus
An image processing apparatus includes the following. A hardware processor decomposes a signal value of input image data into band-limited signals having different frequency bands from each other. A storage stores pieces of preset data. Each of the pieces of preset data comprises tables to associate frequency with a response and to prescribe different response properties from each other. The hardware processor selects a piece of preset data from the pieces of preset data stored in the storage, converts the decomposed band-limited signals on a basis of tables in the selected piece of preset data, reconstructs the converted band-limited signals into enhanced image data, and generates a frequency-enhanced image through addition of the enhanced image data which is multiplied by a predetermined enhancement coefficient to the input image data.
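The processing chain in this abstract — decompose into band-limited signals, convert each band via a per-frequency response, reconstruct, then add the result back scaled by an enhancement coefficient — can be sketched in 1D. This is a schematic NumPy version: the band widths, response values, and coefficient are illustrative stand-ins for the preset-data tables the abstract describes.

```python
import numpy as np

def smooth(x, width):
    k = np.ones(width) / width
    return np.convolve(x, k, mode="same")

def decompose(signal, widths=(3, 9, 27)):
    # Band-limited signals as differences of successively smoothed copies;
    # the bands plus the residual sum back to the input (telescoping).
    bands, prev = [], signal.astype(float)
    for w in widths:
        s = smooth(signal, w)
        bands.append(prev - s)
        prev = s
    bands.append(prev)  # residual low-frequency band
    return bands

def frequency_enhance(signal, responses=(1.8, 1.4, 1.1), coeff=0.5):
    # "Preset data" here is one response value per band, standing in for
    # the frequency-vs-response tables of the selected preset.
    bands = decompose(signal)
    converted = [b * r for b, r in zip(bands, responses)]
    enhanced = np.sum(converted + [bands[-1]], axis=0)  # reconstruct
    return signal + coeff * enhanced
```

Selecting a different preset amounts to swapping in a different `responses` tuple, which changes how strongly each frequency band is boosted.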
Image processing apparatus and image processing method
The present disclosure relates to an image processing apparatus, an image processing method, and a program therefor with which slow shutter photographing can be performed with ease. An image gradient extraction unit extracts image gradient components in an object movement direction from a long-exposure image out of input images. An initial label map generation unit generates an initial label map based on a gradient extraction result from the image gradient extraction unit. A foreground extraction unit extracts a foreground from the input images based on the label map from the initial label map generation unit or label map update unit to generate a synthesis mask. The present disclosure is applicable to, for example, an image pickup apparatus including an image processing function.
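The first two stages of this pipeline — extracting image gradients along the object movement direction from the long-exposure image, then seeding an initial label map from them — can be sketched as below. This is a hypothetical NumPy reduction: the real disclosure refines the label map iteratively before extracting the foreground, and the threshold here is illustrative.

```python
import numpy as np

def gradient_along(image, direction):
    # Finite-difference gradient along the object movement direction.
    dy, dx = direction
    shifted = np.roll(np.roll(image, -dy, axis=0), -dx, axis=1)
    return shifted - image

def initial_label_map(long_exposure, direction=(0, 1), threshold=0.1):
    # Seed foreground labels where the motion-direction gradient is strong;
    # the synthesis mask is then extracted from (updates of) these labels.
    magnitude = np.abs(gradient_along(long_exposure, direction))
    return (magnitude > threshold).astype(np.uint8)
```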
ELECTRONIC DEVICE FOR BLURRING IMAGE OBTAINED BY COMBINING PLURAL IMAGES BASED ON DEPTH INFORMATION AND METHOD FOR DRIVING THE ELECTRONIC DEVICE
An electronic device having a first camera and a second camera deployed on one side of the electronic device, a memory, and at least one processor configured to acquire a plurality of first image frames for external objects using the first camera based on an input corresponding to a photographing signal, acquire a second image frame for the external objects using the second camera while acquiring parts of the first image frames, generate depth information for the external objects based on the second image frame and the image frame corresponding to it among the plurality of first image frames, generate a first corrected image by combining a plurality of designated image frames among the plurality of first image frames, and generate a second corrected image in which parts of the external objects included in the first corrected image are blurred based on the depth information.
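The two correction stages — combining several first-camera frames into a first corrected image, then blurring it selectively by depth — can be sketched as follows. This is a minimal NumPy illustration under assumed simplifications: a plain average stands in for multi-frame combination, a box blur for the bokeh kernel, and the depth threshold is hypothetical.

```python
import numpy as np

def combine_frames(frames):
    # Multi-frame combination: average the designated first-camera frames.
    return np.mean(frames, axis=0)

def depth_blur(image, depth, focus_depth, threshold=0.2, ksize=5):
    # Box blur stands in for a bokeh kernel; pixels whose depth is far from
    # the focus plane are replaced by their blurred value.
    pad = ksize // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.empty_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            blurred[y, x] = padded[y:y + ksize, x:x + ksize].mean()
    out_of_focus = np.abs(depth - focus_depth) > threshold
    return np.where(out_of_focus, blurred, image)
```

The depth map plays the role of the information derived from the second camera: only regions far from the focus plane are softened, leaving the combined in-focus subject sharp.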
Iterative digital subtraction imaging for embolization procedures
Method and related system (IPS) for visualizing, in particular, a volume of a substance during its deposition at a region of interest (ROI). A difference image is formed from a projection image and a mask image. The difference image is then analyzed to derive more accurate information about the motion or shape of the substance. The method or system (IPS) is capable of operating in an iterative manner. The proposed system and method can be used for processing fluoroscopic X-ray frames acquired by an imaging arrangement (100) during an embolization procedure.
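The subtraction step and one iteration of the analysis can be sketched as follows. This is a schematic NumPy reduction: the "motion cue" below is a hypothetical stand-in for the more accurate motion/shape analysis the abstract claims, and the threshold is illustrative.

```python
import numpy as np

def difference_image(projection, mask):
    # Digital subtraction: remove the static background (mask image)
    # from the live projection frame.
    return projection - mask

def deposition_motion(prev_diff, curr_diff, threshold=0.1):
    # Crude per-iteration motion cue: pixels where the subtraction
    # signal changed between consecutive difference images.
    return np.abs(curr_diff - prev_diff) > threshold
```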
Systems and methods of forming enhanced medical images
Systems and methods of producing medical images of a subject are disclosed herein. In one embodiment, structural data and vascular data are acquired from a region of interest in the subject. A filter is generated using structural image data and blood flow image data received from a first layer in the region of interest. The filter is applied to vascular image data acquired from a second, deeper layer in the region of interest to form an image of the second layer having reduced tailing artifacts relative to the unfiltered vascular image data.
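A filter of this shape — built from first-layer structural and flow data, then multiplied into the deeper layer's vascular data — might be sketched as below. This is a hypothetical form chosen only to illustrate the data flow; the abstract does not specify the filter's actual construction, and `eps` is an assumed numerical guard.

```python
import numpy as np

def tail_filter(structural_l1, vascular_l1, eps=1e-6):
    # Hypothetical attenuation map: where superficial flow is strong
    # relative to structure, deeper "tails" are likely, so those
    # locations are damped in the deeper layer.
    return 1.0 / (1.0 + vascular_l1 / (structural_l1 + eps))

def apply_tail_filter(vascular_l2, filt):
    # Suppress tailing artifacts in the second, deeper layer.
    return vascular_l2 * filt
```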
Denoising method and denoising device for reducing noise in an image
A method of reducing noise in an input image by setting, as a local window among color pixels included in the input image, a target pixel and neighboring pixels adjacent to the target pixel, determining color pixel values for the target pixel and each of the neighboring pixels included in the local window, generating local color average values by averaging, color by color, the color pixel values, generating offset color pixel values by converting the color pixel values of the target pixel and the neighboring pixels based on the local color average values, and generating a compensated color pixel value of the target pixel by adjusting the color pixel value of the target pixel based on the offset color pixel values.
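The per-pixel steps — local window, per-color averages, offset values, compensated target — can be sketched for one window. This is a minimal NumPy interpretation: the final adjustment rule (color average plus mean offset) is one plausible reading of "adjusting ... based on the offset color pixel values," not necessarily the patented formula.

```python
import numpy as np

def denoise_target(window_values, window_colors, target_index):
    # Local color averages, computed color by color over the window.
    averages = {c: window_values[window_colors == c].mean()
                for c in np.unique(window_colors)}
    # Offset color pixel values: each pixel relative to its own
    # color's local average.
    offsets = window_values - np.array([averages[c] for c in window_colors])
    # Compensated target: its color's average plus the window's mean
    # offset, pooling detail across colors while averaging out noise.
    return averages[window_colors[target_index]] + offsets.mean()
```

Working in offset space is what lets pixels of different colors contribute to denoising the target without mixing their absolute intensities.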
Compressing generative adversarial neural networks
This disclosure describes one or more embodiments of systems, non-transitory computer-readable media, and methods that utilize channel pruning and knowledge distillation to generate a compact noise-to-image GAN. For example, the disclosed systems prune less informative channels via outgoing channel weights of the GAN. In some implementations, the disclosed systems further utilize content-aware pruning by utilizing a differentiable loss between an image generated by the GAN and a modified version of the image to identify sensitive channels within the GAN during channel pruning. In some embodiments, the disclosed systems utilize knowledge distillation to learn parameters for the pruned GAN to mimic a full-size GAN. In certain implementations, the disclosed systems utilize content-aware knowledge distillation by applying content masks on images generated by both the pruned GAN and its full-size counterpart to obtain knowledge distillation losses between the images for use in learning the parameters for the pruned GAN.