Patent classifications
G06T3/4076
GENERATING HIGH RESOLUTION FIRE DISTRIBUTION MAPS USING GENERATIVE ADVERSARIAL NETWORKS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating high-resolution fire distribution maps. In some implementations, a computer-implemented system obtains a low-resolution distribution map indicating fire distribution of an area with fire burning and a reference map indicating features of the same area. The system processes the low-resolution distribution map and the reference map using a generator neural network to generate output data including a high-resolution synthesized distribution map indicating fire distribution of the area. The generator neural network is trained, based on a plurality of training examples, with a discriminator neural network that outputs a prediction of whether an input to the discriminator neural network is a real distribution map or a synthesized distribution map.
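The generator/discriminator data flow described above can be sketched as follows. This is a minimal structural sketch, not the claimed network: the generator and discriminator are replaced with trivial numpy stand-ins (nearest-neighbour upsampling modulated by the reference map, and a single linear layer with a sigmoid), and all array sizes and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(low_res_map, reference_map, scale=2):
    """Toy stand-in for the generator network: upsample the low-resolution
    fire map by nearest-neighbour repetition, then modulate it with the
    reference map (e.g. terrain/vegetation features) of the same area."""
    up = np.kron(low_res_map, np.ones((scale, scale)))   # nearest-neighbour upsample
    return up * (0.5 + 0.5 * reference_map)              # reference-conditioned refinement

def discriminator(distribution_map, weights):
    """Toy stand-in for the discriminator: one linear layer plus a sigmoid,
    predicting the probability that the input map is a real distribution map."""
    logit = float(distribution_map.ravel() @ weights)
    return 1.0 / (1.0 + np.exp(-logit))

# One forward pass through the adversarial pair.
low_res = rng.random((4, 4))           # coarse fire-distribution map
ref_map = rng.random((8, 8))           # high-resolution reference features
weights = rng.normal(size=64) * 0.1    # illustrative discriminator weights

synthesized = generator(low_res, ref_map)     # high-resolution synthesized map
p_real = discriminator(synthesized, weights)  # real-vs-synthesized prediction
```

In training, the discriminator's prediction on real and synthesized maps would drive the adversarial losses for both networks; only the forward data flow is shown here.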
Median based frequency separation local area contrast enhancement
Local detail enhancement (LDE) is an image contrast-enhancement method applied to visible and uncooled long-wave imagery. It enhances local spatial detail through the use of a median-based high/band-pass filter. The generated detail channel is blended with a histogram-equalized version of the image, creating an image that contains local detail while retaining some amount of global intensity. Retaining global intensity coherency allows for easier target acquisition when compared to fully local forms of contrast enhancement.
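A minimal sketch of this frequency-separation pipeline, assuming an 8-bit intensity range and a small brute-force median filter; the blend weight `alpha` and kernel size `k` are illustrative, not values from the disclosure.

```python
import numpy as np

def median_filter(img, k=3):
    """Median low-pass filter: the frequency-separation step."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def hist_equalize(img):
    """Global histogram equalization to an 8-bit range."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[np.clip(img, 0, 255).astype(np.uint8)]

def local_detail_enhance(img, alpha=1.0, k=3):
    """Blend the median-derived detail channel with the equalized image."""
    detail = img - median_filter(img, k)   # high/band-pass detail channel
    base = hist_equalize(img)              # retains global intensity coherency
    return np.clip(base + alpha * detail, 0, 255)

img = np.tile(np.arange(0, 256, 32, dtype=float), (8, 1))  # simple gradient frame
out = local_detail_enhance(img)
```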
Picture processing method and device
The present disclosure provides a picture processing method and device, including: an integrated circuit chip (IC) receiving a to-be-processed picture sent by a graphics processor (GPU); the IC pre-processing the to-be-processed picture; the IC performing counter-distortion processing on the pre-processed picture; and the IC outputting the counter-distortion-processed picture for display.
Apparatus and method for image processing, and system for training neural network
The present disclosure generally relates to the field of deep learning technologies. An apparatus for generating a plurality of correlation images may include a feature extracting unit configured to receive a training image and extract at least one feature from the training image to generate a first feature image based on the training image; a normalizer configured to normalize the first feature image and generate a second feature image; and a shift correlating unit configured to perform a plurality of translational shifts on the second feature image to generate a plurality of shifted images, and correlate each of the plurality of shifted images with the second feature image to generate the plurality of correlation images.
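The normalize-shift-correlate pipeline can be sketched as below. This is a hedged toy: the elementwise product stands in for whatever correlation operation the disclosure defines, and the shift list is an illustrative assumption.

```python
import numpy as np

def normalize(feature):
    """Normalizer: zero-mean, unit-variance feature image."""
    return (feature - feature.mean()) / (feature.std() + 1e-8)

def shift_correlate(feature, shifts):
    """Shift-correlating unit: translationally shift the normalized feature
    image and correlate each shifted image with the original. An elementwise
    product stands in here for the correlation defined in the disclosure."""
    norm = normalize(feature)               # second feature image
    correlations = []
    for dy, dx in shifts:
        shifted = np.roll(norm, shift=(dy, dx), axis=(0, 1))
        correlations.append(shifted * norm)
    return correlations

feature = np.arange(16, dtype=float).reshape(4, 4)  # stand-in first feature image
maps = shift_correlate(feature, [(0, 1), (1, 0), (1, 1)])
```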
Kernel-aware super resolution
An electronic device includes at least one imaging sensor and at least one processor coupled to the at least one imaging sensor. The at least one imaging sensor is configured to capture a burst of image frames. The at least one processor is configured to generate a low-resolution image from the burst of image frames. The at least one processor is also configured to estimate a blur kernel based on the burst of image frames. The at least one processor is further configured to perform deconvolution on the low-resolution image using the blur kernel to generate a deconvolved image. In addition, the at least one processor is configured to generate a high-resolution image using super resolution (SR) on the deconvolved image.
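The estimate-deconvolve-upscale pipeline might look like the following sketch. The kernel estimate is a placeholder (the patent derives it from the burst), the deconvolution uses a standard frequency-domain Wiener filter, and nearest-neighbour upsampling stands in for the SR step; the `snr` value is an assumption.

```python
import numpy as np

def estimate_blur_kernel(burst):
    """Placeholder kernel estimate: a normalized 3x3 box. The disclosure
    estimates the kernel from the burst of frames; that step is elided."""
    k = np.ones((3, 3))
    return k / k.sum()

def wiener_deconvolve(img, kernel, snr=0.01):
    """Frequency-domain Wiener deconvolution with the estimated kernel."""
    K = np.fft.fft2(kernel, s=img.shape)
    H = np.conj(K) / (np.abs(K) ** 2 + snr)   # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def super_resolve(img, scale=2):
    """Stand-in SR step: nearest-neighbour upsampling."""
    return np.kron(img, np.ones((scale, scale)))

burst = [np.random.default_rng(i).random((8, 8)) for i in range(4)]
low_res = np.mean(burst, axis=0)         # low-resolution image from the burst
kernel = estimate_blur_kernel(burst)     # blur kernel based on the burst
deconvolved = wiener_deconvolve(low_res, kernel)
high_res = super_resolve(deconvolved)    # SR on the deconvolved image
```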
IMAGE RECONSTRUCTION METHOD, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
The disclosure provides an image reconstruction method for an edge device, an electronic device and a storage medium. The image reconstruction method includes: extracting low-level features from an input image of a first scale to generate first feature maps, the first feature maps having a second scale greater than the first scale as compared with the input image; extracting low-level features from the input image to generate second feature maps, the second feature maps having the second scale; generating mask maps based on the second feature maps; generating intermediate feature maps based on the mask maps and the first feature maps, the intermediate feature maps having the second scale; synthesizing a reconstructed image having the second scale based on the intermediate feature maps. This method facilitates achieving a better image super-resolution reconstruction effect with lower resource consumption.
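The two-branch, mask-gated flow of that method can be sketched as below. Both feature branches and the sigmoid mask are illustrative stand-ins for the learned feature extractors and mask generation of the disclosure.

```python
import numpy as np

def upsample(img, scale=2):
    """Feature extraction to the second (larger) scale is approximated
    here by nearest-neighbour upsampling."""
    return np.kron(img, np.ones((scale, scale)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct(input_image, scale=2):
    first = upsample(input_image, scale)        # first feature maps (second scale)
    second = upsample(input_image ** 2, scale)  # second feature maps (toy branch)
    mask = sigmoid(second - second.mean())      # mask maps from second features
    intermediate = mask * first                 # intermediate feature maps
    return np.clip(intermediate, 0.0, 1.0)      # synthesized reconstructed image

img = np.random.default_rng(1).random((4, 4))   # first-scale input image
out = reconstruct(img)
```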
EXTRACTING REGION OF INTEREST FROM SCANNED IMAGES AND DETERMINING AN ASSOCIATED IMAGE TYPE THEREOF
ROI (Region of Interest) detection is an important step in extracting relevant information from a document image. Such images are very high resolution, with sizes on the order of megabytes, which makes the text-detection pipeline very slow. Traditional methods detect and extract the ROI from images, but work only for specific image types. Other approaches include deep learning (DL) based methods for ROI detection, which need intensive training and require high-end computing infrastructure/resources with graphical processing unit (GPU) capabilities. Systems and methods of the present disclosure perform ROI extraction by partitioning the input image into parts based on its visual perception and then classifying the image into a first or second category. The region of interest is extracted from a resized image, based on the classification, by applying image processing techniques. Further, the system determines whether the input image is a pre-cropped image or a normal scanned image.
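A toy sketch of the partition-classify-extract flow, under loose assumptions: tile variance stands in for the "visual perception" criterion, the two categories are mapped to pre-cropped vs. normal scan, and all thresholds are illustrative.

```python
import numpy as np

def partition(img, tiles=4):
    """Split the image into a tiles x tiles grid of parts."""
    h, w = img.shape[0] // tiles, img.shape[1] // tiles
    return [img[i*h:(i+1)*h, j*w:(j+1)*w]
            for i in range(tiles) for j in range(tiles)]

def classify(img, tiles=4, threshold=0.05):
    """First category if most parts carry content (variance above a
    threshold), suggesting a pre-cropped image; otherwise second category,
    a normal scan with large blank margins. Thresholds are illustrative."""
    active = sum(part.var() > threshold for part in partition(img, tiles))
    return 'first' if active > (tiles * tiles) // 2 else 'second'

def extract_roi(img):
    """Bounding box of above-mean pixels: a simple ROI extraction."""
    ys, xs = np.where(img > img.mean())
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

scan = np.zeros((16, 16))                                          # blank page
scan[4:12, 4:12] = np.random.default_rng(2).random((8, 8)) + 0.5   # document content
roi = extract_roi(scan)
```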
Systems and methods for blind multi-spectral image fusion
Systems, methods and apparatus for image processing for reconstructing a super-resolution image from multispectral (MS) images. Receive image data and initialize a fused image using a panchromatic (PAN) image, and estimate a blur kernel between the PAN image and the MS images as an initialization function. Iteratively, fuse an MS image with an associated PAN image of a scene using a fusing algorithm. Each iteration includes: update the blur kernel based on a Second-Order Total Generalized Variation function to regularize the kernel shape; fuse the PAN image and MS images with the updated blur kernel based on a local Laplacian prior function to regularize the high-resolution information, obtaining an estimated fused image; and compute a relative error between the estimated fused image of the current iteration and the estimated fused image from the previous iteration, comparing it to a predetermined threshold to determine when to stop the iterations and obtain a PAN-sharpened image.
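The iterate-until-relative-error-converges structure can be sketched as follows. The fusion step is a trivial stand-in (a fixed blend replaces the TGV kernel update and Laplacian-prior fusion); only the initialization-from-PAN and the relative-error stopping rule follow the abstract.

```python
import numpy as np

def fuse(pan, ms_up, estimate):
    """Toy fusion step: move the estimate toward a blend of the PAN image
    and the upsampled MS band. Stands in for the Laplacian-prior fusion
    with the TGV-updated blur kernel."""
    target = 0.5 * pan + 0.5 * ms_up
    return estimate + 0.5 * (target - estimate)

rng = np.random.default_rng(3)
pan = rng.random((8, 8))        # panchromatic image
ms_up = rng.random((8, 8))      # upsampled multispectral band
estimate = pan.copy()           # fused image initialized from the PAN image

threshold, rel_err, iters = 1e-3, 1.0, 0
while rel_err > threshold and iters < 100:
    new_estimate = fuse(pan, ms_up, estimate)
    # Relative error between the current and previous estimated fused images.
    rel_err = np.linalg.norm(new_estimate - estimate) / np.linalg.norm(estimate)
    estimate, iters = new_estimate, iters + 1
```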
METHOD AND SYSTEM FOR RECONSTRUCTING HIGH RESOLUTION VERSIONS OF LOW RESOLUTION IMAGES OF A CINE LOOP SEQUENCE
Systems and methods for reconstructing high resolution versions of low resolution cine sequence images are provided. The method includes acquiring a low resolution cine sequence, receiving a user input stopping the acquisition of the cine sequence, and initiating acquisition of a high resolution image. The method includes iteratively reconstructing a high resolution version of each of the low resolution images in the cine sequence in reverse, beginning with a last acquired low resolution image and ending with a first acquired low resolution image. The high resolution version of the last acquired low resolution image is reconstructed based on the last acquired low resolution image and the high resolution image. The high resolution version of each of the low resolution images prior to the last acquired low resolution image is iteratively reconstructed based on a respective low resolution image and the high resolution version of a subsequently acquired low resolution image.
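The reverse-order propagation described above can be sketched as below. The per-frame reconstruction is a placeholder blend (the actual reconstruction algorithm is not specified in the abstract); the sequence length and blend weight are illustrative.

```python
import numpy as np

def upsample(img, scale=2):
    return np.kron(img, np.ones((scale, scale)))

def reconstruct_frame(low_res_frame, next_high_res, scale=2):
    """Placeholder reconstruction: blend the upsampled low-resolution frame
    with the high-resolution version of the subsequently acquired frame."""
    return 0.5 * upsample(low_res_frame, scale) + 0.5 * next_high_res

rng = np.random.default_rng(4)
cine = [rng.random((4, 4)) for _ in range(5)]   # low-resolution cine sequence
hr_anchor = rng.random((8, 8))                  # high-resolution image acquired after stop

# Reconstruct in reverse: the last LR frame uses the HR anchor image,
# and each earlier frame uses the HR version of its successor.
high_res = {len(cine) - 1: reconstruct_frame(cine[-1], hr_anchor)}
for idx in range(len(cine) - 2, -1, -1):
    high_res[idx] = reconstruct_frame(cine[idx], high_res[idx + 1])
```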
Enhancing the resolution of a video stream
In one embodiment, a method includes accessing first-resolution images corresponding to frames of a video, computing a motion vector based on a first-resolution image of a first frame in the video and a first-resolution image of a second frame in the video, generating a second-resolution warped image associated with the second frame by using the motion vector to warp a second-resolution reconstructed image associated with the first frame, generating a second-resolution intermediate image associated with the second frame based on the first-resolution image associated with the second frame, computing adjustment parameters by processing the first-resolution image associated with the second frame and the second-resolution warped image associated with the second frame using a machine-learning model, and adjusting pixels of the second-resolution intermediate image associated with the second frame based on the adjustment parameters to reconstruct a second-resolution reconstructed image associated with the second frame.
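The warp-then-adjust flow of that embodiment can be sketched as follows. Integer-pixel `np.roll` stands in for motion-compensated warping, and a fixed blend weight stands in for the adjustment parameters that the machine-learning model would compute; the motion vector is given rather than estimated.

```python
import numpy as np

def upsample(img, scale=2):
    return np.kron(img, np.ones((scale, scale)))

def warp(img, motion_vector):
    """Warp by an integer motion vector (stand-in for subpixel warping)."""
    dy, dx = motion_vector
    return np.roll(img, shift=(dy, dx), axis=(0, 1))

def enhance_frame(lr_curr, hr_prev_recon, motion_vector, scale=2):
    # Second-resolution warped image from the previous reconstruction.
    hr_warped = warp(hr_prev_recon, (motion_vector[0] * scale,
                                     motion_vector[1] * scale))
    # Second-resolution intermediate image from the current LR frame.
    hr_intermediate = upsample(lr_curr, scale)
    # Adjustment parameters would come from the learned model; a fixed
    # blend weight stands in for them here.
    alpha = 0.5
    return alpha * hr_intermediate + (1 - alpha) * hr_warped

rng = np.random.default_rng(5)
lr_frame1 = rng.random((4, 4))
lr_frame2 = np.roll(lr_frame1, shift=(0, 1), axis=(0, 1))  # frame 2: frame 1 shifted
hr_recon1 = upsample(lr_frame1)                            # HR reconstruction of frame 1
hr_recon2 = enhance_frame(lr_frame2, hr_recon1, (0, 1))    # HR reconstruction of frame 2
```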