G06T3/4069

Optical image stabilization movement to create a super-resolution image of a scene

The present disclosure describes systems and techniques directed to optical image stabilization movement to create a super-resolution image of a scene. The systems and techniques include a user device (102) introducing (502), through an optical image stabilization system (114), movement to one or more components of a camera system (112) of the user device (102). The user device (102) then captures (504) respective and multiple frames (306) of an image of a scene, where the respective and multiple frames (306) of the image of the scene have respective, sub-pixel offsets of the image of the scene across the multiple frames (306) as a result of the introduced movement to the one or more components of the camera system (112). The user device (102) performs (506), based on the respective, sub-pixel offsets of the image of the scene across the respective, multiple frames (306), super-resolution computations and creates (508) the super-resolution image of the scene based on the super-resolution computations.
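The super-resolution computation described above can be sketched as a simple shift-and-add: each sub-pixel-shifted low-resolution frame contributes its samples to a distinct phase of a denser grid. This is a minimal illustration under the assumption that the offsets are known exactly; registration, demosaicing, and robustness weighting from a real OIS pipeline are omitted.

```python
import numpy as np

def shift_and_add(frames, offsets, scale):
    """Fuse sub-pixel-shifted low-resolution frames onto a high-resolution grid.

    frames  -- list of (h, w) arrays captured with small scene shifts
    offsets -- list of (dy, dx) sub-pixel shifts, in low-res pixel units
    scale   -- integer super-resolution factor
    """
    h, w = frames[0].shape
    H, W = h * scale, w * scale
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for frame, (dy, dx) in zip(frames, offsets):
        # Each low-res sample lands on the high-res cell nearest its shift.
        ys = np.arange(h) * scale + int(round(dy * scale))
        xs = np.arange(w) * scale + int(round(dx * scale))
        acc[np.ix_(ys % H, xs % W)] += frame
        cnt[np.ix_(ys % H, xs % W)] += 1
    cnt[cnt == 0] = 1  # unobserved cells stay zero
    return acc / cnt
```

With a 2x factor and four frames offset by half a pixel in each direction, every high-resolution cell is observed exactly once and the dense grid is recovered exactly.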

System and method for multiscale deep equilibrium models

A computer-implemented method for classification and for training a neural network includes receiving input at the neural network, wherein the input includes a plurality of resolution inputs of varying resolutions, outputting a plurality of feature tensors for each corresponding resolution of the plurality of resolution inputs, fusing the plurality of feature tensors utilizing upsampling or downsampling for the varying resolutions, utilizing an equilibrium solver to identify one or more prediction vectors from the plurality of feature tensors, and outputting a loss in response to the one or more prediction vectors.
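The fusion step, in which feature tensors at several resolutions exchange information by resampling, might be sketched as below. This is a toy in which nearest-neighbour resizing and plain summation stand in for the learned fusion layers; the equilibrium solver itself is not modelled.

```python
import numpy as np

def nearest_resize(x, shape):
    """Nearest-neighbour resize, used for both upsampling and downsampling."""
    H, W = shape
    ys = np.arange(H) * x.shape[0] // H
    xs = np.arange(W) * x.shape[1] // W
    return x[np.ix_(ys, xs)]

def fuse(features):
    """Cross-resolution fusion: each branch receives the sum of all
    branches resized to its own resolution."""
    return [sum(nearest_resize(f, g.shape) for f in features)
            for g in features]
```

Each output tensor keeps the shape of its own branch while incorporating information from every other resolution, which is the essential property the abstract describes.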

SYSTEMS AND METHODS FOR STRUCTURED ILLUMINATION MICROSCOPY

The technology disclosed relates to structured illumination microscopy (SIM). In particular, the technology disclosed relates to capturing and processing, in real time, numerous image tiles across a large image plane, dividing them into subtiles, efficiently processing the subtiles, and producing enhanced resolution images from the subtiles. The enhanced resolution images can be combined into enhanced images and can be used in subsequent analysis steps.
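The tiling step can be illustrated with a short helper that divides one image tile into subtiles for independent (and parallelizable) processing. This is a hypothetical sketch; the actual SIM reconstruction applied to each subtile is far more involved.

```python
import numpy as np

def subtiles(tile, n):
    """Split one image tile into an n x n grid of subtiles so each can be
    processed independently."""
    h, w = tile.shape
    return [tile[i * h // n:(i + 1) * h // n, j * w // n:(j + 1) * w // n]
            for i in range(n) for j in range(n)]
```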

IMAGE PROCESSING DEVICES AND METHODS

A still or motion imaging device generates a plurality of image frames with a sensor and processes frames to generate an output image frame. The imaging device can apply some or all of de-noising, resolution enhancement, high dynamic range processing, image development functions, pre-emphasis, and compression to the image frames, while deferring tonal processing.

Image sensor and camera module using same

An image sensor according to an embodiment of the present invention includes: a pixel array in which a plurality of pixels are arrayed in a grid shape, and which converts reflection light signals reflected from an object into electrical signals; an image processor which converts the electrical signals to generate subframes, and extracts pieces of second depth information having a higher resolution than pieces of first depth information extracted from a plurality of the subframes; and a memory for storing the pieces of first depth information, wherein the reflection light signals are input to the pixel array through mutually different optical paths shifted in sub-pixel units of the pixel array, and the memory stores a plurality of the pieces of first depth information that correspond to the mutually different optical paths.

System and method for depth map recovery

A method for reconstructing a downsampled depth map includes receiving, at an electronic device, image data to be presented on a display of the electronic device at a first resolution, wherein the image data includes a color image and the downsampled depth map associated with the color image. The method further includes generating a high resolution depth map by calculating, for each point making up the first resolution, a depth value based on a normalized pose difference across a neighborhood of points for the point, a normalized color texture difference across the neighborhood of points for the point, and a normalized spatial difference across the neighborhood of points. Still further, the method includes outputting, on the display, a reprojected image at the first resolution based on the color image and the high resolution depth map. The downsampled depth map is at a resolution less than the first resolution.
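A common way to realize this kind of guided depth upsampling is joint bilateral filtering over a neighborhood of low-resolution samples. The sketch below uses only the colour and spatial terms; the normalized pose term from the abstract is omitted, a single-channel guide image is assumed, and the Gaussian weights are illustrative stand-ins for the normalized differences.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale,
                             sigma_c=10.0, sigma_s=2.0, radius=2):
    """Upsample a low-res depth map guided by a high-res (grayscale) image.

    Each high-res depth value is a weighted average of nearby low-res
    depth samples; the weights combine colour similarity and spatial
    distance across the neighborhood of points.
    """
    h, w = depth_lr.shape
    H, W = h * scale, w * scale
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / scale, x / scale      # position in low-res units
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ly, lx = int(cy) + dy, int(cx) + dx
                    if 0 <= ly < h and 0 <= lx < w:
                        # colour difference between the target pixel and the
                        # low-res sample's location in the guide image
                        dc = color_hr[y, x] - color_hr[min(ly * scale, H - 1),
                                                       min(lx * scale, W - 1)]
                        ds = (cy - ly) ** 2 + (cx - lx) ** 2
                        wgt = np.exp(-(dc * dc) / (2 * sigma_c ** 2)
                                     - ds / (2 * sigma_s ** 2))
                        num += wgt * depth_lr[ly, lx]
                        den += wgt
            out[y, x] = num / den
    return out
```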

UPSCALING DEVICE, UPSCALING METHOD, AND UPSCALING PROGRAM
20220318951 · 2022-10-06

Provided are an upscaling device, an upscaling method, and an upscaling program capable of performing keying processing at low cost and with high accuracy. An upscaling device includes an image obtaining section configured to obtain a high-resolution input image illustrating a background and a foreground, a low-resolution input image obtained by converting the high-resolution input image into a low-resolution image, and a low-resolution alpha image illustrating a region of the foreground in the low-resolution input image; a correlation obtaining section configured to obtain a correlation between grayscale information of the low-resolution input image and an alpha value of the low-resolution alpha image for each corresponding pixel pair of the low-resolution input image and the low-resolution alpha image, and to upscale the correlation to the resolution of the high-resolution input image; and an output section configured to output a high-resolution alpha image having the same resolution as the high-resolution input image on the basis of the high-resolution input image and the upscaled correlation.
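One plausible reading of the correlation step is a guided-filter-style local linear model, alpha ≈ a·gray + b, fitted per pixel at low resolution and then applied at high resolution. The coefficient fitting below is an assumption for illustration, not the disclosed method, and the box-filter window size and `eps` regularizer are arbitrary choices.

```python
import numpy as np

def upscale_alpha(gray_lr, alpha_lr, gray_hr, eps=1e-4):
    """Upscale a low-res alpha matte using per-pixel correlation between
    low-res grayscale and alpha, applied to the high-res grayscale image."""
    def box(x, r=1):
        # simple box filter via shifted sums over an edge-padded copy
        out = np.zeros_like(x, dtype=float)
        p = np.pad(x.astype(float), r, mode='edge')
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                out += p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
        return out / (2 * r + 1) ** 2
    mg, ma = box(gray_lr), box(alpha_lr)
    cov = box(gray_lr * alpha_lr) - mg * ma   # local gray/alpha correlation
    var = box(gray_lr * gray_lr) - mg * mg
    a = cov / (var + eps)                     # local linear coefficients
    b = ma - a * mg
    scale = gray_hr.shape[0] // gray_lr.shape[0]
    A = np.kron(a, np.ones((scale, scale)))   # nearest-neighbour upscale
    B = np.kron(b, np.ones((scale, scale)))
    return A * gray_hr + B
```

When the low-res alpha tracks the low-res grayscale exactly, the fitted coefficients approach a ≈ 1, b ≈ 0, and the output follows the high-res grayscale.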

GAMING SUPER RESOLUTION

A processing device is provided which includes memory and a processor. The processor is configured to receive an input image having a first resolution, generate at least one linear down-sampled version of the input image via a linear upscaling network, generate at least one non-linear down-sampled version of the input image via a non-linear upscaling network, extract a first feature map from the at least one linear down-sampled version of the input image, and extract a second feature map from the at least one non-linear down-sampled version of the input image. The processor is also configured to convert the at least one linear down-sampled version of the input image and the at least one non-linear down-sampled version of the input image into pixels of an output image having a second resolution higher than the first resolution using the first feature map and the second feature map.
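At its simplest, combining a linear and a non-linear branch into a higher-resolution output looks like the toy below. The fixed weights stand in for the trained upscaling networks, and the feature-map extraction and conversion stages are not modelled.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling."""
    return np.kron(x, np.ones((2, 2)))

def super_resolve(x, w_lin=0.5, w_non=0.5):
    """Combine a linear and a non-linear upscaled version of the input.
    The scalar weights are illustrative stand-ins for network parameters."""
    linear = upsample2x(x) * w_lin                   # linear branch
    nonlin = np.maximum(upsample2x(x) * w_non, 0.0)  # non-linear branch (ReLU)
    return linear + nonlin
```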

Method for generating a super-resolution image and related device

A method for generating a super-resolution image and a related device are provided. In one aspect, the method comprises: receiving a first low-resolution image and a second low-resolution image, the first low-resolution image and the second low-resolution image having a first spatial resolution and having been captured simultaneously by a pair of pixel arrays of a common image sensor, wherein the pixel arrays of the image sensor are located so as to be diagonally shifted from each other by a sub-pixel increment; adaptively enhancing the first low-resolution image and the second low-resolution image to generate an enhanced first low-resolution image and an enhanced second low-resolution image, respectively; mapping (e.g., non-uniformly) pixels of each of the enhanced first and second low-resolution images to a super-resolution grid having a spatial resolution greater than the first spatial resolution to generate a first intermediate super-resolution image and a second intermediate super-resolution image, respectively; and combining the first intermediate super-resolution image and the second intermediate super-resolution image to generate a composite super-resolution image.
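Placing two diagonally half-pixel-shifted images onto a 2x grid yields a quincunx sampling pattern. The minimal fusion below fills the two unobserved phases from their four observed orthogonal neighbours; this simple averaging is an assumption standing in for the adaptive enhancement and non-uniform mapping described above.

```python
import numpy as np

def fuse_diagonal_pair(img_a, img_b):
    """Map two low-res images, offset diagonally by half a pixel, onto a
    2x super-resolution grid, then fill the unobserved cells with the
    mean of their four observed orthogonal neighbours."""
    h, w = img_a.shape
    sr = np.zeros((2 * h, 2 * w))
    sr[0::2, 0::2] = img_a   # samples from the first pixel array
    sr[1::2, 1::2] = img_b   # samples from the diagonally shifted array
    # In a quincunx layout, every unobserved cell's four orthogonal
    # neighbours are observed; reflect-padding keeps that true at borders.
    pad = np.pad(sr, 1, mode='reflect')
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
             pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    mask = np.zeros_like(sr, dtype=bool)
    mask[0::2, 1::2] = True  # unobserved phases
    mask[1::2, 0::2] = True
    sr[mask] = neigh[mask]
    return sr
```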

Method, device, and computer program for improving the reconstruction of dense super-resolution images from diffraction-limited images acquired by single molecule localization microscopy
11676247 · 2023-06-13

The invention relates to reconstructing a synthetic dense super-resolution image from at least one low-information-content image, for example from a sequence of diffraction-limited images acquired by single molecule localization microscopy. After having obtained such a sequence of diffraction-limited images, a sparse localization image is reconstructed from the obtained sequence of diffraction-limited images according to single molecule localization microscopy image processing. The reconstructed sparse localization image and/or a corresponding low-resolution wide-field image are input to an artificial neural network and a synthetic dense super-resolution image is obtained from the artificial neural network, the latter being trained with training data comprising triplets of sparse localization images, at least partially corresponding low-resolution wide-field images, and corresponding dense super-resolution images, as a function of a training objective function comparing dense super-resolution images and corresponding outputs of the artificial neural network.