H04N23/815

Image frame processing method
20170339345 · 2017-11-23 ·

The present disclosure provides an image frame processing method for processing a plurality of input image frames with an image processing device. An embodiment of the method comprises: receiving a plurality of input image frames; and processing the plurality of input image frames to produce a first number of first output image frames and a second number of second output image frames, in which the resolution of the first output image frames is higher than the resolution of the second output image frames and the first number is less than the second number, and in which a first frame of the first output image frames and a second frame of the second output image frames are derived from the same one of the plurality of input image frames.
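The split into fewer high-resolution outputs and more low-resolution outputs could be sketched as follows. The 2×2-averaging downscale and the one-high-resolution-frame-in-four policy are illustrative assumptions, not the patent's method; the key property shown is that a first and a second output frame derive from the same input frame.

```python
import numpy as np

def process_frames(frames, hi_every=4):
    """Sketch: every input yields a half-resolution output; every
    `hi_every`-th input additionally yields a full-resolution output,
    so the high-resolution count is less than the low-resolution count."""
    hi_out, lo_out = [], []
    for i, frame in enumerate(frames):
        # Each frame produces a half-resolution output (2x2 averaging).
        h, w = frame.shape[:2]
        lo = frame[:h - h % 2, :w - w % 2].reshape(
            h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))
        lo_out.append(lo)
        if i % hi_every == 0:
            # Selected frames also produce a full-resolution output,
            # derived from the SAME input frame as the low-res one above.
            hi_out.append(frame)
    return hi_out, lo_out
```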

Increasing dynamic range of a virtual production display

The processor obtains a third pixel value and a second pixel value of the display. The processor determines a desired pixel value range that exceeds the second pixel value of the display. The processor obtains a threshold between the third pixel value of the display and the second pixel value of the display. The processor obtains a function mapping the desired pixel value range to a range between the threshold and the second pixel value. The processor applies the function to an input image prior to displaying the input image on the display. The display presents the resulting image. Upon recording the presented image, the processor determines a region within the recorded image having a pixel value between the threshold and the second pixel value. The processor increases the dynamic range of the recorded image by applying an inverse of the function to the pixel values of the region.
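A minimal sketch of such an invertible mapping, assuming a simple linear compression above the threshold (the abstract only requires that the function map the desired range into the displayable range and have an inverse):

```python
def make_mapping(threshold, display_max, desired_max):
    """Sketch (linear compression is an assumption): values in
    [threshold, desired_max] are squeezed into [threshold, display_max]
    before display, then expanded back after the displayed image is
    recorded; values at or below the threshold pass through unchanged."""
    scale = (display_max - threshold) / (desired_max - threshold)

    def forward(v):
        if v <= threshold:
            return v
        return threshold + (v - threshold) * scale

    def inverse(v):
        if v <= threshold:
            return v
        return threshold + (v - threshold) / scale

    return forward, inverse
```

For example, with a threshold of 0.8, a display maximum of 1.0, and a desired maximum of 2.0, `forward` compresses 2.0 to exactly 1.0, and `inverse` recovers the original value from the recorded one.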

INCREASING DYNAMIC RANGE OF A VIRTUAL PRODUCTION DISPLAY

A processor performing preprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. The processor modifies a region of the input image according to predetermined steps, producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the region's original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.
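One way to sketch the encode/decode round trip, assuming a hypothetical checkerboard pattern whose amplitude carries the over-threshold difference (the abstract does not specify the pattern, only that it is unlikely to occur naturally and that the predetermined steps are reversible):

```python
import numpy as np

def encode_region(image, mask, threshold):
    """Sketch: replace over-threshold pixels with a checkerboard-signed
    pattern (assumed) whose amplitude encodes the difference between
    the original pixel value and the threshold."""
    out = image.copy()
    ys, xs = np.nonzero(mask)
    diff = image[ys, xs] - threshold
    checker = np.where((ys + xs) % 2 == 0, 1.0, -1.0)
    out[ys, xs] = threshold + checker * diff  # pattern carries the diff
    return out

def decode_region(encoded, mask, threshold):
    """Reverse the predetermined steps to recover the original values."""
    out = encoded.copy()
    ys, xs = np.nonzero(mask)
    diff = np.abs(encoded[ys, xs] - threshold)  # amplitude -> difference
    out[ys, xs] = threshold + diff
    return out
```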

Information processing apparatus, information processing method, and program

An information processing apparatus includes a first optical system, a second optical system, and a casing. The first optical system is configured to input light into a first imaging device. The second optical system is configured to input light into a second imaging device. The casing includes one surface that is long in a specific direction, with the first optical system and the second optical system arranged in the one surface along an orthogonal direction substantially orthogonal to the specific direction. The first optical system and the second optical system are arranged such that an optical axis of the first optical system and an optical axis of the second optical system form an angle in the specific direction.

Computing virtual screen imagery based on a stage environment, camera position, and/or camera settings

Methods and systems are presented for generating a virtual scene rendering usable in a captured scene, based on a camera position of a camera in a stage environment, a mapping of a plurality of subregions of a virtual scene display in the stage environment to corresponding positions in the stage environment, and details of a virtual scene element. The details might include the subregion of the plurality of subregions in which a given virtual scene element would, at least in part, appear on the virtual scene display, along with stage subregion depth values. A blur factor for a corresponding subregion might be determined based at least in part on the stage subregion depth value and the virtual subregion depth value. Rendering the virtual scene might take the blur factor for the given virtual scene element into account.
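A hedged sketch of how a blur factor might combine the stage and virtual depth values, using an assumed thin-lens circle-of-confusion model (the abstract leaves the exact formula open; the aperture constant and the clamping are illustrative):

```python
def blur_factor(stage_depth, virtual_depth, focal_depth, aperture=0.05):
    """Hypothetical blur model: the element is shown on the display wall
    at stage_depth but represents content at virtual_depth. The real
    camera blurs the wall itself; the renderer must add the remaining
    blur the content would have if it were physically at virtual_depth."""
    # Blur the camera applies to the display wall:
    coc_wall = aperture * abs(stage_depth - focal_depth) / stage_depth
    # Blur the content should have at its virtual depth:
    coc_virtual = aperture * abs(virtual_depth - focal_depth) / virtual_depth
    # Renderer pre-blurs by the shortfall (never a negative blur):
    return max(0.0, coc_virtual - coc_wall)
```

With the wall in focus (`stage_depth == focal_depth`), a distant virtual element still receives a positive blur factor, matching the intuition that in-focus LED-wall content representing far-away scenery must be blurred by the renderer.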

IMAGE CAPTURING APPARATUS AND CONTROL METHOD THEREOF
20170310878 · 2017-10-26

An image capturing apparatus having an image sensor reads out an image signal from pixels of the image sensor, controls a region in which a plurality of image signals having different viewpoints are read out from pixels of the image sensor, acquires depth information using the image signals read out from the region, and records image information in which the image signals, the depth information, and information regarding the region are associated with each other.

Image-capturing device and image processing method
09794477 · 2017-10-17 ·

An image-capturing device includes: a photographic optical system; a photoelectric conversion element array made up of a plurality of photoelectric conversion elements arrayed therein; a micro-lens array made up of a plurality of micro-lenses arrayed therein; a data creation unit that creates pixel data at a plurality of pixels on a specific image forming plane by applying filter matrix data to output signals provided from the plurality of photoelectric conversion elements; and an image synthesis unit that synthetically generates an image on the specific image forming plane at a given position assumed along an optical axis of the photographic optical system, based upon the pixel data. The filter matrix data assume a two-dimensional data array pattern conforming to a specific intensity distribution with a distribution center thereof set at an element corresponding to a central position of a projection image of each of the plurality of pixels.

Image pickup element, imaging device, and imaging method

In order to improve imaging performance, an imaging apparatus is provided that includes an image capturing unit configured to detect incident light and generate raw image data, a compression unit configured to compress the raw image data to generate coded data having a data amount smaller than that of the raw image data, and an output unit configured to output the coded data to a processing unit for processing the coded data. Furthermore, the image capturing unit, the compression unit, and the output unit are configured to be within the same semiconductor package.

IMAGE RECORDING APPARATUS AND METHOD FOR CONTROLLING THE SAME
20170295344 · 2017-10-12 ·

An image recording apparatus generates a reduced image by reducing an image, and generates a first cutout image by cutting out a part of the image before the reduction. It performs processing for image recording, involving writing into a memory, on the reduced image, and performs recording processing for recording the processed image. It presents a first display by outputting an image based on the reduced image to a display unit, and presents an enlarged display, larger than the first display, by outputting an image based on the first cutout image to the display unit during the recording processing. The apparatus does not perform specific processing involving reading data from or writing data into the memory at least while the enlarged display is ongoing during the recording processing.

Image processing device and operating method thereof
11669935 · 2023-06-06 ·

An image processing device includes: an image sensor for acquiring a pixel value of each of a plurality of pixels; and a controller for acquiring a pattern image including the pixel value of each of the plurality of pixels and an exposure value representing an exposure time, generating a plurality of super resolution images based on pixels having the same exposure value among the plurality of pixels included in the pattern image, generating a motion map, which represents motion of an object, based on a ratio of exposure values of pixels at a selected position among the pixels included in the plurality of super resolution images and a ratio of pixel values of the pixels at the selected position, and generating a target image according to a weighted sum of the plurality of super resolution images and the motion map.
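The motion map from exposure and pixel-value ratios, and the weighted fusion, could be sketched like this. The normalization and clipping are assumptions for illustration; `sr_long` and `sr_short` stand for two super-resolution images built from long- and short-exposure pixels respectively.

```python
import numpy as np

def motion_map(sr_long, sr_short, exp_long, exp_short, eps=1e-6):
    """Sketch: for a static scene, pixel values should scale with
    exposure time, so the pixel-value ratio should match the exposure
    ratio; deviation between the two is treated as motion (assumed
    formulation, normalized and clipped to [0, 1])."""
    exposure_ratio = exp_long / exp_short
    pixel_ratio = (sr_long + eps) / (sr_short + eps)
    return np.clip(np.abs(pixel_ratio - exposure_ratio) / exposure_ratio,
                   0.0, 1.0)

def fuse(sr_long, sr_short, exp_long, exp_short):
    """Weighted sum of the super-resolution images guided by the motion
    map: favor the (exposure-normalized) short exposure where motion is
    detected, and the long exposure elsewhere."""
    m = motion_map(sr_long, sr_short, exp_long, exp_short)
    return m * sr_short * (exp_long / exp_short) + (1.0 - m) * sr_long
```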