H04N9/76

Eliminating image artifacts using image-layer snapshots
11468549 · 2022-10-11

Methods and systems disclosed herein relate generally to using image-layer snapshots to eliminate image artifacts. A pixel-adjustment module receives an indication of a selected region within a first image layer of an image. In response, the pixel-adjustment module generates a first snapshot of the first image layer, in which the first snapshot includes pixel data for restoring a first state of the first image layer at which the selected region is yet to be modified. The pixel-adjustment module generates a second image layer, in which an image structure-modification operation is applied to the pixels of the second image layer that correspond to the selected region. The pixel-adjustment module then modifies a pixel in the selected region to include at least part of the pixel data from the first snapshot.
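The snapshot-then-restore flow above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the names `region` (a boolean mask over the layer) and `operation` (any pixel-modifying function) are assumptions for the example.

```python
import numpy as np

def apply_with_snapshot(layer, region, operation):
    """Apply 'operation' to a second layer, then restore the selected
    region from a snapshot taken before modification (sketch only)."""
    # First snapshot: pixel data needed to restore the unmodified state
    # of the selected region.
    snapshot = layer[region].copy()
    # Second image layer: the structure-modification operation applied
    # to a copy of the first layer.
    second_layer = operation(layer.copy())
    # Restore pixels in the selected region from the snapshot, removing
    # any artifact the operation introduced there.
    second_layer[region] = snapshot
    return second_layer

layer = np.zeros((4, 4), dtype=np.uint8)
region = np.zeros((4, 4), dtype=bool)
region[1:3, 1:3] = True
result = apply_with_snapshot(layer, region, lambda img: img + 50)
```

Here the operation brightens every pixel, but the selected 2x2 region keeps its original values because it is restored from the snapshot.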

Determination of the image depth map of a scene

A method for estimating the image depth map of a scene includes the following steps: providing (E1) an image whose focus depends on the depth and wavelength of the considered object points of the scene, using a longitudinal chromatic optical system; determining (E2) a set of spectral images from the image provided by the longitudinal chromatic optical system; deconvolving (E3) the spectral images to provide estimated spectral images with extended depth of field; and analyzing (E4) a cost criterion that depends on the estimated spectral images with extended depth of field to provide an estimated depth map.
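A toy stand-in for step E4 is shown below. Because a longitudinal chromatic system focuses each wavelength at a different depth, the spectral channel that is locally sharpest indicates the depth of that pixel; here the "cost criterion" is simply negative local gradient energy, and `channel_focus_depths` is a hypothetical calibration table, neither taken from the patent.

```python
import numpy as np

def depth_from_spectral_sharpness(spectral_images, channel_focus_depths):
    """Per pixel, pick the focus depth of the spectral channel with the
    highest local gradient energy (a crude proxy for the cost criterion)."""
    costs = []
    for img in spectral_images:
        gy, gx = np.gradient(img.astype(float))
        sharpness = gx ** 2 + gy ** 2
        costs.append(-sharpness)          # lower cost = sharper channel
    best = np.argmin(np.stack(costs), axis=0)
    return np.take(np.asarray(channel_focus_depths), best)

# Synthetic example: channel 0 has edges (in focus), channel 1 is flat.
spectral = [np.tile(np.array([0., 1., 0., 1.]), (4, 1)), np.zeros((4, 4))]
depth = depth_from_spectral_sharpness(spectral, [0.3, 0.7])
```

A real implementation would compare re-blurred extended-depth-of-field estimates against the observations, but the channel-selection structure is the same.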

GENERATING A COMPOSITE IMAGE FROM A PHYSICAL ITEM
20170359560 · 2017-12-14

A computer-implemented method includes capturing, with a camera, a first image of a physical item at a first camera position, detecting borders associated with the physical item, based on the first image, generating an overlay that includes a plurality of objects that are positioned within one or more of the borders associated with the physical item, capturing, with the camera, subsequent images of the physical item, where each subsequent image is captured with a respective subsequent camera position, and during capture of the subsequent images, displaying an image preview that includes the overlay. The method further includes establishing correspondence between pixels of the first image and pixels of each of the subsequent images and generating a composite image of the physical item, where each pixel value of the composite image is based on corresponding pixel values of the first image and the subsequent images.
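Once correspondence between pixels of the first image and each subsequent image is established, the final combination step can be sketched as below. Alignment is assumed already done here, and the per-pixel median is one common choice (an assumption, not stated in the abstract) for suppressing glare and reflections across captures.

```python
import numpy as np

def composite_from_captures(aligned_images):
    """Each composite pixel value is derived from the corresponding
    pixel values across all aligned captures (sketch: per-pixel median)."""
    stack = np.stack([img.astype(float) for img in aligned_images])
    return np.median(stack, axis=0).astype(np.uint8)

# A glare spot present in only one capture is removed by the median.
base = np.full((2, 2, 3), 10, dtype=np.uint8)
glare = base.copy()
glare[0, 0] = 255
comp = composite_from_captures([base, base, glare])
```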

METHODS AND SYSTEMS FOR COMBINING FOREGROUND VIDEO AND BACKGROUND VIDEO USING CHROMATIC MATCHING
20170359559 · 2017-12-14

Disclosed herein are methods and systems for combining foreground video and background video using chromatic matching. In an embodiment, a system obtains foreground video data. The system obtains background video data. The system determines a color-distribution dimensionality of the background video data to be either high-dimensional chromatic or low-dimensional chromatic. The system selects a chromatic-adjustment technique from a set of chromatic-adjustment techniques based on the determined color-distribution dimensionality of the background video data. The system adjusts the foreground video data using the selected chromatic-adjustment technique. The system generates combined video data at least in part by combining the background video data with the adjusted foreground video data. The system outputs the combined video for display.
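The dimensionality test and technique selection might look like the sketch below. The specific classifier (per-channel chroma variance against a threshold) and the two adjustment techniques (a mean shift for low-dimensional backgrounds, Reinhard-style mean/std transfer for high-dimensional ones) are illustrative assumptions, not the patent's methods.

```python
import numpy as np

def match_foreground_to_background(fg, bg, var_threshold=100.0):
    """Classify the background's color distribution, then adjust the
    foreground with the technique selected for that class (sketch)."""
    fg = fg.astype(float)
    bg = bg.astype(float)
    # Low variance ~ low-dimensional chromatic (e.g. near-monochrome set).
    high_dimensional = bg.reshape(-1, 3).var(axis=0).mean() > var_threshold
    out = np.empty_like(fg)
    for c in range(3):
        f, b = fg[..., c], bg[..., c]
        if high_dimensional:
            # Mean/std transfer per channel.
            scale = b.std() / (f.std() + 1e-6)
            out[..., c] = (f - f.mean()) * scale + b.mean()
        else:
            # Shift the foreground mean toward the background mean.
            out[..., c] = f + (b.mean() - f.mean())
    return np.clip(out, 0, 255).astype(np.uint8)

# Flat background -> low-dimensional branch shifts the foreground to it.
bg = np.full((4, 4, 3), 200, dtype=np.uint8)
fg = np.full((4, 4, 3), 50, dtype=np.uint8)
adjusted = match_foreground_to_background(fg, bg)
```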

Dual camera module including hyperspectral camera module, apparatuses including dual camera module, and method of operating the same

A dual camera module including a hyperspectral camera module, an apparatus including the same, and a method of operating the apparatus are provided. The dual camera module includes a hyperspectral camera module configured to provide a hyperspectral image of a subject; and an RGB camera module configured to provide an image of the subject and to obtain an RGB correction value used to correct the hyperspectral image.
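One plausible form of the RGB correction is sketched below: each hyperspectral band is scaled so that the bands overlapping a given RGB channel jointly match that channel's mean from the RGB camera module. The `band_to_channel` mapping and the gain-style correction are assumptions for illustration; the abstract does not specify the correction's form.

```python
import numpy as np

def rgb_correct_hyperspectral(hsi, rgb, band_to_channel):
    """Scale hyperspectral bands so bands assigned to each RGB channel
    match that channel's mean intensity (hypothetical sketch)."""
    corrected = hsi.astype(float).copy()
    rgb = rgb.astype(float)
    for c in range(3):
        bands = [b for b, ch in enumerate(band_to_channel) if ch == c]
        if not bands:
            continue
        hsi_mean = corrected[..., bands].mean()
        if hsi_mean > 0:
            # The RGB correction value here is a per-channel gain.
            corrected[..., bands] *= rgb[..., c].mean() / hsi_mean
    return corrected

hsi = np.ones((4, 4, 6))                 # six bands, all ones
rgb = np.zeros((4, 4, 3))
rgb[..., 0], rgb[..., 1], rgb[..., 2] = 10, 20, 30
corrected = rgb_correct_hyperspectral(hsi, rgb, [0, 0, 1, 1, 2, 2])
```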

Deinterleaving interleaved high dynamic range image by using YUV interpolation

Systems and methods for generating high dynamic range images from interleaved Bayer array data with high spatial resolution and reduced sampling artifacts. The Bayer array data are demosaiced into components of the YUV color space before deinterleaving. The Y component and the UV components can be derived from the Bayer array data through demosaic convolution processes. A respective convolution is performed between a convolution kernel and a set of adjacent pixels of the Bayer array that are in the same color channel. A convolution kernel is selected based on the mosaic pattern of the Bayer array and the color channels of the set of adjacent pixels. The Y data and UV data are deinterleaved and interpolated into frames of short and long exposures in the YUV color space. The short-exposure and long-exposure frames are then blended and converted back to an RGB frame representing a high dynamic range image.
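The deinterleave-and-blend stage can be sketched on the Y (luma) component alone, as below. Row-pair interleaving, row repetition as the interpolation, and the exposure ratio and saturation threshold are all illustrative assumptions; the patent's kernels and weights are not reproduced here.

```python
import numpy as np

def deinterleave_rows(y):
    """Row-interleaved capture: even rows short exposure, odd rows long.
    Missing rows are filled by repetition (a crude interpolation)."""
    short_y = np.repeat(y[0::2], 2, axis=0)
    long_y = np.repeat(y[1::2], 2, axis=0)
    return short_y, long_y

def blend_exposures(short_y, long_y, ratio=4.0, sat=250.0):
    """Blend: trust the long exposure except where it saturates, where
    the gain-compensated short exposure takes over (sketch only)."""
    short_y = short_y.astype(float) * ratio      # gain-match exposures
    long_y = long_y.astype(float)
    w = np.clip((sat - long_y) / sat, 0.0, 1.0)  # weight for long frame
    return w * long_y + (1.0 - w) * short_y

# Long-exposure rows are fully saturated, so the blend falls back to
# the gain-compensated short exposure.
y = np.empty((4, 4))
y[0::2] = 10.0    # short-exposure rows
y[1::2] = 255.0   # long-exposure rows (clipped)
short_y, long_y = deinterleave_rows(y)
hdr_y = blend_exposures(short_y, long_y)
```

A full pipeline would apply the same deinterleaving to the UV data and convert the blended YUV frames back to RGB.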
