Patent classifications
G06T2207/10144
Unified bracketing approach for imaging
Devices, methods, and computer-readable media are disclosed describing an adaptive approach for image bracket selection and fusion, e.g., to generate low noise and high dynamic range (HDR) images in a wide variety of capturing conditions. An incoming image stream may be obtained from an image capture device, wherein the incoming image stream comprises a variety of differently-exposed captures, e.g., EV0 images, EV− images, EV+ images, long exposure (or synthetic long exposure) images, EV0/EV− image pairs, etc., which are received according to a particular pattern. When a capture request is received, a set of rules and/or a decision tree may be used to evaluate one or more capture conditions associated with the images from the incoming image stream and determine which two or more images to select for a fusion operation. A noise reduction process may optionally be performed on the selected images before (or after) the registration and fusion operations.
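The rule-based bracket selection described above can be illustrated with a minimal sketch. The thresholds, exposure labels, and condition names below are hypothetical stand-ins, not values taken from the disclosure:

```python
def select_bracket(scene_lux: float, motion_detected: bool,
                   high_dynamic_range: bool) -> list[str]:
    """Pick which buffered captures to fuse, per a simple rule set
    (hypothetical thresholds for illustration only)."""
    if scene_lux < 10:  # very low light
        # favor a (synthetic) long exposure unless motion would smear it
        return ["EV0", "EV0", "EV0"] if motion_detected else ["EV0", "LONG"]
    if high_dynamic_range:
        # bracket around EV0 to recover highlights and shadows
        return ["EV-", "EV0", "EV+"]
    # well-lit, low-dynamic-range scene: fuse EV0 frames for noise reduction
    return ["EV0", "EV0"]
```

A real implementation would evaluate many more capture conditions (gain, thermal state, scene classification) in a full decision tree; this sketch only shows the shape of such a rule set.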
Target Recognition Method and Apparatus
A target recognition method includes performing first image processing on first image data to obtain first artificial intelligence (AI) input data; performing second image processing on second image data to obtain second AI input data, where an exposure duration corresponding to the first AI input data is different from an exposure duration corresponding to the second AI input data, or a dynamic range corresponding to the first AI input data is different from a dynamic range corresponding to the second AI input data, and both the first image data and the second image data are raw image data generated by an image sensor; and performing target recognition based on the first AI input data and the second AI input data, and determining target information.
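One way to combine recognition results from two differently-exposed inputs is to run a detector on each and merge the detections, discarding duplicates by overlap. The following sketch assumes a simple intersection-over-union (IoU) test; the detection format and threshold are illustrative, not from the claims:

```python
def merge_detections(dets_short, dets_long, iou_thresh=0.5):
    """Merge detections from a short- and a long-exposure input.
    Each detection is (x0, y0, x1, y1, score); long-exposure detections
    are kept only where they do not overlap an existing one."""
    def iou(a, b):
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    merged = list(dets_short)
    for d in dets_long:
        if all(iou(d, m) < iou_thresh for m in merged):
            merged.append(d)
    return merged
```

This captures the idea that a target saturated in one exposure may still be recognized in the other, so the union of the two passes is more complete than either alone.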
Synchronized spinning LIDAR and rolling shutter camera system
One example system comprises a LIDAR sensor that rotates about an axis to scan an environment of the LIDAR sensor. The system also comprises one or more cameras that detect external light originating from one or more external light sources. The one or more cameras together provide a plurality of rows of sensing elements. The rows of sensing elements are aligned with the axis of rotation of the LIDAR sensor. The system also comprises a controller that operates the one or more cameras to obtain a sequence of image pixel rows. A first image pixel row in the sequence is indicative of external light detected by a first row of sensing elements during a first exposure time period. A second image pixel row in the sequence is indicative of external light detected by a second row of sensing elements during a second exposure time period.
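Because the rows of sensing elements are aligned with the LIDAR's rotation axis, the rolling-shutter readout can be scheduled so that each row's exposure midpoint coincides with the moment the rotating beam sweeps that row's azimuth. The function below is a sketch of that timing under an assumed uniform rotation; all parameter names are hypothetical:

```python
def row_trigger_times(t_fov_start, lidar_period_s, fov_deg, n_rows, exposure_s):
    """Exposure start times so each row's exposure midpoint coincides with
    the LIDAR beam crossing that row's azimuth (uniform rotation assumed)."""
    # time the beam spends crossing the slice of the camera FOV seen by one row
    dwell_per_row = lidar_period_s * (fov_deg / 360.0) / n_rows
    return [t_fov_start + (i + 0.5) * dwell_per_row - exposure_s / 2.0
            for i in range(n_rows)]
```

For a 10 Hz LIDAR (0.1 s period) and a 90-degree camera field of view split across the rows, successive rows are triggered one dwell interval apart, reproducing the staggered first/second exposure time periods described above.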
MERGING MULTIPLE EXPOSURES TO GENERATE A HIGH DYNAMIC RANGE IMAGE
A method of generating a high dynamic range (HDR) image is provided that includes capturing a long exposure image and a short exposure image of a scene, computing a merging weight for each pixel location of the long exposure image based on a pixel value of the pixel location and a saturation threshold, and computing a pixel value for each pixel location of the HDR image as a weighted sum of corresponding pixel values in the long exposure image and the short exposure image, wherein a weight applied to a pixel value of the pixel location in the short exposure image and a weight applied to a pixel value of the pixel location in the long exposure image are determined based on the merging weight computed for the pixel location and responsive to motion in the scene between the long exposure image and the short exposure image.
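The per-pixel merge described above can be sketched as follows. The linear weight ramp and the way motion biases the weight are assumptions for illustration; the abstract specifies only that the weight depends on the long-exposure pixel value, the saturation threshold, and motion:

```python
def merge_hdr_pixel(p_long, p_short, sat_thresh, exposure_ratio, motion=0.0):
    """Blend one pixel location: weight shifts toward the short exposure
    as the long exposure nears saturation or where motion is detected."""
    # merging weight from the long-exposure value vs. the saturation threshold
    w = min(1.0, max(0.0, (sat_thresh - p_long) / sat_thresh))
    w = w * (1.0 - motion)                     # motion favors the short exposure
    p_short_scaled = p_short * exposure_ratio  # bring exposures to a common scale
    return w * p_long + (1.0 - w) * p_short_scaled
```

Note that a consistent scene gives the same radiance from either exposure (25 at one quarter the exposure time scales to 100), so the weighted sum is stable; a fully saturated long-exposure pixel (w = 0) falls back entirely on the scaled short exposure.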
AMPLIFIER GLOW REDUCTION
An efficient tool to remove amplifier glow from low-light and long-exposure digital images, without sacrificing the useful signal contained in these images. This is particularly useful in deep space imagery, where long exposure times are common and where the darkness of the captured images further highlights the effects of amplifier glow.
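The abstract does not specify the removal method; a classic baseline for amplifier glow is dark-frame subtraction, sketched below for context. The dark frame (same exposure time and sensor temperature, shutter closed) captures the glow pattern, which is then subtracted per pixel:

```python
def subtract_glow(image, dark_frame, pedestal=0):
    """Per-pixel dark-frame subtraction, clamped at a black-level pedestal
    so noise in the dark frame cannot drive pixels negative."""
    return [[max(pedestal, p - d) for p, d in zip(row_i, row_d)]
            for row_i, row_d in zip(image, dark_frame)]
```

A tool like the one described would need to improve on this baseline, since naive subtraction also removes useful signal wherever the dark frame over-estimates the glow.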
ELECTRONIC DEVICE PROVIDING IMAGE-BASED IMAGE EFFECT AND METHOD FOR CONTROLLING THE SAME
An electronic device may include a camera, a display, and at least one processor. The at least one processor may be configured to display a first image obtained through the camera in a first area of the display, identify a plurality of areas included in the first image, identify a plurality of image effects applicable to the plurality of areas, display a plurality of second images to which the plurality of image effects are applied, respectively, in a second area adjacent to the first area, and display a third image resulting from applying an image effect corresponding to an image selected from among the plurality of second images to the first image.
Multi frequency long range distance detection for amplitude modulated continuous wave time of flight cameras
A time of flight (ToF) system includes a light source, a photosensor, a signal generator, and a processor. The signal generator outputs a reference signal corresponding to a modulation function for modulated light and a modified transmitted light signal corresponding to a phase shift of the reference signal. The light source outputs the modified transmitted light signal, and pixels in the photosensor receive its reflections from the scene. The reference signal is applied to the pixels, and the processor determines a depth map for the scene based on values recorded by the pixels. In some examples, the phase shift is implemented using a phase-locked loop controller. One or more component phases of the phase shift, and an exposure time for each component phase, are determined and output by the phase-locked loop controller.
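The standard amplitude-modulated continuous-wave (AMCW) depth relation underlying such a system, and the single-frequency range ambiguity that motivates using multiple modulation frequencies, can be written as a short sketch (function names are illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_depth(phase_rad, f_mod_hz):
    """Depth from the measured phase at one modulation frequency:
    d = c * phi / (4 * pi * f_mod). Ambiguous beyond the unambiguous range."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Maximum depth measurable without phase wrapping: c / (2 * f_mod)."""
    return C / (2.0 * f_mod_hz)
```

At 15 MHz modulation the unambiguous range is roughly 10 m; measuring the scene at two or more frequencies and unwrapping the combined phases (as the multi-frequency title suggests) extends detection well beyond any single frequency's limit.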
HIGH DYNAMIC RANGE IMAGE SYNTHESIS METHOD AND ELECTRONIC DEVICE
In the technical solutions of a high dynamic range image synthesis method and an electronic device provided in embodiments of this application, a plurality of images with different depths of field in a current photographing scene are obtained based on an HDR photographing operation entered by a user, and each image has an exposure value. A plurality of images with a same exposure value are synthesized to generate a full depth-of-field image. Full depth-of-field images with a plurality of exposure values are synthesized by using an HDR algorithm to generate a high dynamic range image. Therefore, a high dynamic range image that is clear at each depth of field can be obtained, resolving the problem in which a shallow depth of field leads to a blurred background and an insufficient dynamic range, which in turn results in overexposure or underexposure of the high dynamic range image.
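The two-stage pipeline above (focus-stack within each exposure value, then HDR-merge across exposure values) can be sketched minimally. The sharpness proxy and the exposure-weighted average are assumptions for illustration, working on 1-D "images" to keep the example short:

```python
def focus_stack(images):
    """Per pixel, keep the value from the image with the highest local
    contrast (a crude sharpness proxy for the in-focus capture)."""
    n = len(images[0])
    out = []
    for i in range(n):
        def sharp(img):
            lo, hi = max(0, i - 1), min(n - 1, i + 1)
            return abs(img[hi] - img[lo])
        out.append(max(images, key=sharp)[i])
    return out

def hdr_merge(stacks_by_ev):
    """Average the per-EV full depth-of-field images after normalizing
    each by 2**EV to a common radiance scale."""
    n = len(next(iter(stacks_by_ev.values())))
    return [sum(img[i] / (2.0 ** ev) for ev, img in stacks_by_ev.items())
            / len(stacks_by_ev) for i in range(n)]
```

First each same-EV group of differently-focused captures collapses into one sharp image, then those full depth-of-field images merge across EVs, matching the order of operations in the abstract.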
IMAGING DEVICE, IMAGING METHOD, AND IMAGING PROGRAM
Provided are an imaging device, an imaging method, and an imaging program capable of easily acquiring a slow-motion moving image with good image quality. In one aspect of the present invention, an imaging device includes an optical system, an imaging element, and a processor, and the processor performs detection processing of detecting a movement of a subject based on an image signal output from the imaging element, frame rate control of increasing a frame rate of a moving image output from the imaging element based on the detected movement, exposure control processing of keeping the ratio of the exposure time per frame to the frame duration of the moving image constant as the frame rate increases, and dimming control processing of changing a degree of dimming of the optical system according to the exposure control processing.
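The exposure and dimming controls above reduce to simple arithmetic: holding the exposure fraction of each frame constant means the absolute exposure time shrinks as the frame rate rises, and the optical dimming (e.g., a variable ND filter) is relaxed by the same factor to keep brightness constant. A sketch, with hypothetical parameter names:

```python
import math

def exposure_for_fps(fps, duty=0.5):
    """Exposure time that keeps a fixed fraction (duty) of each frame
    interval, e.g. a constant 180-degree shutter at duty=0.5."""
    return duty / fps

def nd_stops_to_open(fps_old, fps_new):
    """Stops of optical dimming to remove when the frame rate rises,
    compensating for the proportionally shorter exposure."""
    return math.log2(fps_new / fps_old)
```

Going from 30 fps to 120 fps quarters the exposure time, so the dimming control opens the filter by two stops (a 4x light increase) and the image brightness is unchanged.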
HIGH DYNAMIC RANGE IMAGE PROCESSING
Systems and techniques are described for generating a high dynamic range (HDR) image. An imaging system can be configured to receive a first image captured by an image sensor according to a first exposure time. The imaging system can generate a modified image based on the first image by modifying the first image using a gain setting to simulate a second exposure time based on exposure compensation. The imaging system generates a high dynamic range (HDR) image at least in part by merging multiple images. The multiple images include a second-exposure image that corresponds to the second exposure time. The second-exposure image can be the modified image, or can be based on the modified image, e.g., a variant of the modified image processed for noise reduction based on one or more other images actually captured using the second exposure time.
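The gain-based simulation of a longer exposure amounts to scaling pixel values by the exposure-time ratio (equivalently, by 2 raised to the EV difference) and clipping at the sensor maximum. A minimal sketch, with illustrative parameter names:

```python
def simulate_exposure(pixels, t_captured, t_target, max_val=255.0):
    """Approximate a longer exposure by applying digital gain equal to the
    exposure-time ratio, clipping at the sensor's saturation level."""
    gain = t_target / t_captured  # equivalently 2 ** (EV difference)
    return [min(max_val, p * gain) for p in pixels]
```

The clipping is why the simulated frame is only an approximation: values that would have saturated during a real long exposure are reproduced, but the amplified noise is not reduced, which is why the abstract also describes denoising the modified image against frames actually captured at the second exposure time.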