Patent classifications
H04N23/741
Method for intelligent frame capture for high-dynamic range images
A method for processing images by an information handling system includes receiving image data including first frames captured using a first exposure at a first frame rate and second frames captured using a second exposure at a second frame rate. The first frame rate is greater than the second frame rate. The method includes merging each second frame of the second frames with each first frame of a corresponding plurality of the first frames to generate a corresponding plurality of merged frames. Each merged frame of the corresponding plurality of merged frames may have a dynamic range of tonal values greater than each dynamic range of tonal values of each frame merged to form the merged frame.
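The frame-pairing structure described in this abstract can be sketched in a few lines. This is a minimal illustration, assuming an integer ratio between the two frame rates; `merge_streams` and the toy averaging merge are hypothetical stand-ins, not the claimed method, and frames are modeled as plain numbers rather than pixel arrays.

```python
# Toy sketch: each long-exposure frame (captured at the lower rate) is
# merged with every short-exposure frame in its corresponding group,
# preserving the higher output frame rate.

def merge_streams(first_frames, second_frames, merge):
    """Pair each second frame with its group of first frames and merge."""
    ratio = len(first_frames) // len(second_frames)  # first rate / second rate
    merged = []
    for i, long_frame in enumerate(second_frames):
        group = first_frames[i * ratio:(i + 1) * ratio]
        merged.extend(merge(short, long_frame) for short in group)
    return merged

# Toy merge: average the two "frames" (stand-in for an HDR fusion).
out = merge_streams([10, 20, 30, 40], [100, 200], lambda a, b: (a + b) / 2)
```

Note that the output keeps the count of the higher-rate stream, matching the abstract's "corresponding plurality of merged frames".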
Image capturing method and terminal device
An image capturing method and a terminal device are provided. The method includes entering a camera application to start a lens and display a viewfinder interface, converting an original image captured by the lens into a red-green-blue (RGB) image, and decreasing luminance of the RGB image to be less than first luminance or increasing the luminance of the RGB image to be greater than second luminance, to obtain a first image; converting the RGB image into N frames of high-dynamic-range (HDR) images, and fusing color information of pixels in any same location on the first image and the N frames of HDR images to obtain a final image.
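One plausible reading of the luminance adjustment and per-pixel fusion steps can be sketched as follows. The Rec. 709 luma weights, the simple scaling to a target luminance, and the plain averaging fusion are all assumptions for illustration, not the patented method.

```python
# Sketch: push the RGB pixel's luminance to a dark target to form the
# "first image", then fuse per-pixel color with the N HDR frames.

def scale_to_luminance(rgb, target_lum):
    """Scale an (r, g, b) pixel so its Rec. 709 luma equals target_lum."""
    r, g, b = rgb
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b
    k = target_lum / lum if lum else 0.0
    return (r * k, g * k, b * k)

def fuse_pixel(first_px, hdr_pixels):
    """Average color information of co-located pixels across all frames."""
    frames = [first_px, *hdr_pixels]
    n = len(frames)
    return tuple(sum(px[c] for px in frames) / n for c in range(3))

dark = scale_to_luminance((0.8, 0.8, 0.8), 0.2)   # darkened first-image pixel
final = fuse_pixel(dark, [(0.4, 0.4, 0.4), (0.6, 0.6, 0.6)])
```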
Surgical camera system with high dynamic range
An endoscopic camera device having an optical assembly; a first image sensor in optical communication with the optical assembly, the first image sensor receiving a first exposure and transmitting a first low dynamic range image; a second image sensor in optical communication with the optical assembly, the second image sensor receiving a second exposure and transmitting a second low dynamic range image, the second exposure being higher than the first exposure; and a processor for receiving the first low dynamic range image and the second low dynamic range image; wherein the processor is configured to combine the first low dynamic range image and the second low dynamic range image into a high dynamic range image using a luminosity value derived as a preselected percentage of a cumulative luminosity distribution of at least one of the first low dynamic range image and the second low dynamic range image.
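The percentile-derived luminosity value can be illustrated with a one-dimensional toy. This is a guess at one plausible use of such a threshold (selecting between exposures per pixel); the function names and the blending rule are assumptions, not the claimed combination.

```python
# Sketch: take a luminosity value at a preselected percentage of the
# cumulative luminosity distribution of the long-exposure image, then use
# it as a per-pixel switch between the two low-dynamic-range images.

def percentile_luminosity(pixels, pct):
    """Luminosity at a given percentage of the cumulative distribution."""
    ordered = sorted(pixels)
    idx = min(len(ordered) - 1, int(pct / 100.0 * len(ordered)))
    return ordered[idx]

def combine(low_exp, high_exp, pct=90):
    threshold = percentile_luminosity(high_exp, pct)
    # Where the long exposure clips (>= threshold), use the short exposure.
    return [lo if hi >= threshold else hi for lo, hi in zip(low_exp, high_exp)]

hdr = combine([10, 50, 90, 120], [40, 200, 250, 255], pct=50)
```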
SIMPLE BUT VERSATILE DYNAMIC RANGE CODING
To obtain a good yet easy-to-use luminance dynamic range conversion, we describe an image color processing apparatus (200) arranged to transform an input color (R,G,B) of a pixel of an input image (Im_in) having a first luminance dynamic range into an output color (Rs, Gs, Bs) of a pixel of an output image (Im_res) having a second luminance dynamic range, which first and second dynamic ranges differ in extent by at least a multiplicative factor of 2, comprising: a maximum determining unit (101) arranged to calculate a maximum (M) of color components of the input color, the color components at least comprising a red, green and blue component; a uniformization unit (201) arranged to apply a function (FP) to the maximum (M) as input, which function has a logarithmic shape and was predetermined to be of a fixed shape enabling a linear input to be transformed into a more perceptually uniform output variable (u); a function application unit (203) arranged to receive a functional shape of a function, which was specified previously by a human color grader, and apply the function to the uniform output variable (u), yielding a transformed uniform value (TU); a linearization unit (204) arranged to transform the transformed uniform value (TU) to a linear domain value (LU); a multiplication factor determination unit (205) arranged to determine a multiplication factor (a) equal to the linear domain value (LU) divided by the maximum (M); and a multiplier (104) arranged to multiply at least three linear color components (R,G,B) by the multiplication factor (a), yielding the output color.
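The unit-by-unit pipeline above maps directly to code. In this minimal sketch only the structure follows the abstract: the perceptual function FP (a log2-based curve here) and the grader's curve (a gamma lift) are stand-in assumptions, since the abstract fixes only their general shape.

```python
import math

def fp(m, peak=1000.0):
    """Assumed log-shaped perceptual uniformization of a linear value."""
    return math.log2(1.0 + m * peak) / math.log2(1.0 + peak)

def fp_inv(u, peak=1000.0):
    """Inverse of fp: back to the linear domain."""
    return (2.0 ** (u * math.log2(1.0 + peak)) - 1.0) / peak

def convert_pixel(rgb, grader_curve):
    r, g, b = rgb
    m = max(r, g, b)                 # maximum of color components (unit 101)
    if m == 0.0:
        return (0.0, 0.0, 0.0)
    u = fp(m)                        # perceptually uniform variable (unit 201)
    tu = grader_curve(u)             # grader-specified function (unit 203)
    lu = fp_inv(tu)                  # linear domain value (unit 204)
    a = lu / m                       # multiplication factor (unit 205)
    return (r * a, g * a, b * a)     # scale all components equally (unit 104)

out = convert_pixel((0.5, 0.25, 0.1), lambda u: u ** 0.8)
```

Because all three components are scaled by the same factor a, the ratios between channels (and hence the hue) are preserved while the luminance range is remapped.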
SYSTEMS AND METHODS FOR CAPTURING DIGITAL IMAGES
A system, method, and computer program product are provided for capturing digital images. In use, at least one ambient exposure parameter is determined, and at least one flash exposure parameter based on the at least one ambient exposure parameter is determined. Next, via at least one camera module, an ambient image is captured according to the at least one ambient exposure parameter, and, via the at least one camera module, a flash image is captured according to the at least one flash exposure parameter. The captured ambient image and the captured flash image are stored. Lastly, the captured ambient image and the captured flash image are combined to generate a first merged image. Additional systems, methods, and computer program products are also presented.
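The capture flow can be sketched as below. The parameter derivation (flash exposure at a fixed EV offset from the ambient exposure) and the per-pixel maximum merge are illustrative assumptions only; the abstract does not specify either rule.

```python
# Sketch: derive an ambient exposure, derive the flash exposure from it,
# then combine the two captured images into one merged image.

def ambient_exposure(scene_lum, target=0.18):
    """Pick an exposure gain mapping mean scene luminance to a mid-gray target."""
    return target / scene_lum if scene_lum else 1.0

def flash_exposure(ambient_gain, ev_offset=-1.0):
    """Derive the flash exposure from the ambient one (assumed EV offset)."""
    return ambient_gain * (2.0 ** ev_offset)

def merge(ambient_img, flash_img):
    """Per-pixel max keeps flash-lit detail without crushing ambient tones."""
    return [max(a, f) for a, f in zip(ambient_img, flash_img)]

g_a = ambient_exposure(0.09)
g_f = flash_exposure(g_a)
merged = merge([0.1, 0.5, 0.2], [0.3, 0.4, 0.6])
```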
ENCODING, DECODING, AND REPRESENTING HIGH DYNAMIC RANGE IMAGES
Techniques are provided to encode and decode image data comprising a tone mapped (TM) image with HDR reconstruction data in the form of luminance ratios and color residual values. In an example embodiment, luminance ratio values and residual values in color channels of a color space are generated on an individual pixel basis based on a high dynamic range (HDR) image and a derivative tone-mapped (TM) image that comprises one or more color alterations that would not be recoverable from the TM image with a luminance ratio image. The TM image with HDR reconstruction data derived from the luminance ratio values and the color-channel residual values may be outputted in an image file to a downstream device, for example, for decoding, rendering, and/or storing. The image file may be decoded to generate a restored HDR image free of the color alterations.
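The ratio-plus-residual idea can be shown per pixel: the HDR pixel is reconstructed from the tone-mapped pixel via a stored luminance ratio, and the color-channel residuals capture alterations the ratio alone cannot undo. The Rec. 709 luma weights and this exact residual layout are assumptions for illustration.

```python
# Sketch: encode an HDR pixel relative to its tone-mapped counterpart as
# (luminance ratio, per-channel residuals), then decode it back.

def luma(px):
    r, g, b = px
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def encode(hdr_px, tm_px):
    ratio = luma(hdr_px) / luma(tm_px)
    residual = tuple(h - t * ratio for h, t in zip(hdr_px, tm_px))
    return ratio, residual

def decode(tm_px, ratio, residual):
    return tuple(t * ratio + r for t, r in zip(tm_px, residual))

hdr = (4.0, 2.0, 1.0)
tm = (0.8, 0.5, 0.3)
ratio, res = encode(hdr, tm)
restored = decode(tm, ratio, res)
```

Here the residuals make the round trip exact, mirroring the abstract's claim that the restored HDR image is free of the color alterations.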
Systems and Methods for High Dynamic Range Imaging Using Array Cameras
Systems and methods for high dynamic range imaging using array cameras in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a method of generating a high dynamic range image using an array camera includes defining at least two subsets of active cameras, determining image capture settings for each subset of active cameras, where the image capture settings include at least two exposure settings, configuring the active cameras using the determined image capture settings for each subset, capturing image data using the active cameras, synthesizing an image for each of the at least two subsets of active cameras using the captured image data, and generating a high dynamic range image using the synthesized images.
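The subset-definition and configuration steps can be sketched as below. The alternating grouping and the specific settings are assumptions; the abstract only requires at least two subsets with at least two exposure settings.

```python
# Sketch: split the active cameras into two subsets and attach per-subset
# capture settings; each subset later yields one synthesized image.

def define_subsets(active_cameras):
    """Alternate cameras between a short- and a long-exposure subset."""
    return {"short": active_cameras[0::2], "long": active_cameras[1::2]}

def configure(subsets, settings):
    """Attach per-subset capture settings (exposure here) to each camera."""
    return {cam: settings[name] for name, cams in subsets.items() for cam in cams}

subsets = define_subsets(["cam0", "cam1", "cam2", "cam3"])
config = configure(subsets, {"short": {"exposure_ms": 2},
                             "long": {"exposure_ms": 16}})
```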
Image alignment for computational photography
Image frames for computational photography may be corrected, such as through rolling shutter correction (RSC), prior to fusion of the image frames to reduce wobble and jitter artifacts present in a video sequence of HDR-enhanced image frames. First and second motion data regarding motion of the image capture device may be determined for times corresponding to the capturing of the first and second image frames, respectively. The RSC may be applied to the first and second image frames based on both the first and second motion data. The corrected first and second image frames may then be aligned and fused to obtain a single output image frame with higher dynamic range than either of the first or second image frames.
Methods and systems for image processing with multiple image sources
Various methods and systems are provided for image processing for multiple cameras. In one embodiment, a method comprises acquiring image frames with a plurality of image frame sources configured with different acquisition settings, processing the image frames based on the different acquisition settings to generate at least one final image frame, and outputting the at least one final image frame. In this way, information from different image frame sources such as cameras may be leveraged to achieve increased frame rates with improved image quality and a desired motion appearance.