Patent classifications
G06T5/94
IMAGE PROCESSING
A method of calibrating image data, the method comprising the steps of obtaining the image data and applying a shading correction mesh to the image data, wherein the shading correction mesh comprises a plurality of nodes and is used to generate shading correction values for each pixel location in the image data. The generated shading correction values are then grouped into blocks, and the blocks into groups, each group comprising a plurality of blocks and each block comprising a plurality of pixel locations. An analysis of each group of blocks of generated shading correction values is performed, and an updated shading correction mesh is generated based on the analysis of the groups of one or more blocks.
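As an illustrative sketch (not the patented implementation), the mesh-to-pixel step can be read as interpolating per-pixel gains from a coarse node grid; bilinear interpolation is assumed here, and the block-grouping/analysis stage is omitted:

```python
import numpy as np

def apply_shading_mesh(image, mesh):
    """Upsample a coarse gain mesh to per-pixel shading correction
    values (bilinear interpolation between mesh nodes) and apply it."""
    h, w = image.shape
    mh, mw = mesh.shape
    # fractional mesh coordinates for every pixel location
    ys = np.linspace(0, mh - 1, h)
    xs = np.linspace(0, mw - 1, w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, mh - 1); x1 = np.minimum(x0 + 1, mw - 1)
    fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
    gains = (mesh[np.ix_(y0, x0)] * (1 - fy) * (1 - fx)
             + mesh[np.ix_(y0, x1)] * (1 - fy) * fx
             + mesh[np.ix_(y1, x0)] * fy * (1 - fx)
             + mesh[np.ix_(y1, x1)] * fy * fx)
    return image * gains

# toy example: flat field with stronger correction toward the corners
image = np.full((64, 64), 100.0)
mesh = np.ones((5, 5))
mesh[0, 0] = mesh[0, -1] = mesh[-1, 0] = mesh[-1, -1] = 2.0
corrected = apply_shading_mesh(image, mesh)
```

The corners are doubled while the centre is left essentially unchanged, which is the usual shape of a vignetting correction.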
Method and apparatus for correction of an image
Disclosed is an apparatus comprising a processing device and a method for color correction of an image in a processing device, the method comprising: obtaining an image; determining a Laplacian matrix of the image; obtaining a first region of the image, the first region being indicative of a part of the image to be color corrected; obtaining a second region of the image; obtaining a first recoloring image based on the second region; determining a first corrected region of a first corrected image based on the Laplacian matrix and the first recoloring image; and obtaining and outputting a corrected image based on the first corrected region of the first corrected image.
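The abstract does not specify how the Laplacian matrix is built or solved; as a loose sketch, a similarity-weighted graph Laplacian can propagate a recolouring value from seed pixels (the second region) through the region to be corrected, with pixels outside that region pinned to their original values. Every weight and parameter below is an assumption:

```python
import numpy as np

def image_laplacian(gray, sigma=0.1):
    """4-neighbour graph Laplacian whose edge weights fall off with
    intensity difference (a simple stand-in for the unspecified
    Laplacian construction in the abstract)."""
    h, w = gray.shape
    L = np.zeros((h * w, h * w))
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    wgt = np.exp(-((gray[y, x] - gray[yy, xx]) ** 2) / sigma)
                    i, j = y * w + x, yy * w + xx
                    L[i, j] -= wgt; L[j, i] -= wgt
                    L[i, i] += wgt; L[j, j] += wgt
    return L

# toy 8x8 image: dark left half, bright right half (the region to correct)
gray = np.zeros((8, 8)); gray[:, 4:] = 1.0
L = image_laplacian(gray)

# pin pixels outside the corrected region to their original value, and
# pin recolouring seeds inside it to 0.9; solving the regularised system
# propagates the new value along similar pixels
pinned = np.zeros((8, 8), bool); target = np.zeros((8, 8))
pinned[:, :4] = True                      # keep the dark half at 0.0
pinned[:, 7] = True; target[:, 7] = 0.9   # recolouring seeds
lam = 100.0
A = L + lam * np.diag(pinned.ravel().astype(float))
b = lam * (pinned.ravel() * target.ravel())
corrected = np.linalg.solve(A, b).reshape(8, 8)
```

The seed value spreads across the bright half (strong intra-region weights) but barely leaks across the intensity edge into the dark half.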
Apparatus, system, and method for enhancing an image
Described herein is a method of enhancing an image that includes determining a level of environmental artifacts at a plurality of positions on an image frame of image data. The method also includes adjusting local area processing of the image frame, based on the level of environmental artifacts at each of the plurality of positions, to generate an adjusted image frame of image data. The method further includes displaying the adjusted image frame.
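One plausible reading, sketched below with a global-contrast operation standing in for the local area processing: scale the enhancement strength down wherever the estimated artifact level is high, so artifacts are not amplified. The artifact map and gain range are illustrative assumptions:

```python
import numpy as np

def adjust_local_processing(frame, artifact_level, max_gain=2.0):
    """Contrast enhancement whose per-position strength is reduced
    where the environmental-artifact level (0..1) is high."""
    mean = frame.mean()
    gain = 1.0 + (max_gain - 1.0) * (1.0 - artifact_level)
    return mean + gain * (frame - mean)

# checkerboard detail on a flat background; artifacts on the right half
i, j = np.indices((16, 16))
frame = 100.0 + ((i + j) % 2 * 2 - 1) * 10.0
level = np.zeros((16, 16)); level[:, 8:] = 1.0
out = adjust_local_processing(frame, level)
```

Contrast is doubled on the clean left half and passed through unchanged on the artifact-heavy right half.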
Image processing method, image processor, image capturing device, and image capturing method for generating omnifocal image
A plurality of captured images is first acquired by capturing images of an object while changing the focal position along an optical axis. Variations in magnification among the captured images are then acquired. On the basis of these variations, corresponding pixels in the captured images are identified, and their definition is compared. An image reference value, indicating which captured image is to be referenced for the luminance value at each set of coordinates in an omnifocal image, is then determined on the basis of the comparison. The omnifocal image is generated by referencing, for each set of coordinates, the luminance value in the captured image indicated by the image reference value. In this way, an omnifocal image that accurately reflects the position and size of the object can be generated.
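Leaving out the magnification-correction step, the reference-value selection can be sketched as a standard focus-stacking loop; Laplacian magnitude is assumed as the "definition" measure, and a scaled box blur models defocus:

```python
import numpy as np

def omnifocal(stack):
    """Per pixel, pick the stack slice with the highest local sharpness
    (Laplacian magnitude), and build the all-in-focus image from those
    image reference values. Borders are treated as periodic for brevity."""
    sharp = [np.abs(np.roll(img, 1, 0) + np.roll(img, -1, 0)
                    + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
             for img in stack]
    ref = np.argmax(np.stack(sharp), axis=0)   # image reference values
    return np.choose(ref, stack), ref

def box_blur(a):                               # crude defocus model
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
            + np.roll(a, 1, 1) + np.roll(a, -1, 1) + a) / 5

i, j = np.indices((16, 16))
scene = ((i + j) % 2 * 2 - 1).astype(float)    # high-contrast test pattern
blurred = box_blur(box_blur(scene))
img0 = scene.copy(); img0[:, 8:] = blurred[:, 8:]   # right half out of focus
img1 = scene.copy(); img1[:, :8] = blurred[:, :8]   # left half out of focus
fused, ref = omnifocal([img0, img1])
```

Each half of the fused image is taken from the capture in which that half is sharp, reconstructing the full-contrast pattern.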
Method, device, and system for enhancing changes in an image captured by a thermal camera
There is provided a method, a device (104), and a system (100) for enhancing changes in an image (103a) of an image sequence (103) captured by a thermal camera (102). An image (103a) which is part of the image sequence (103) is received (S02) and pixels (408) in the image that have changed in relation to another image (103b) in the sequence are identified (S04). Based on the intensity values of the identified pixels, a function (212, 212a, 212b, 212c, 212d, 212e) which is used to redistribute intensity values of changed as well as non-changed pixels in the image is determined (S06). The function has a maximum (601) for a first intensity value (602) in a range (514) of the intensity values of the identified pixels, and decays with increasing distance from the first intensity value.
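A minimal sketch of the redistribution idea: weight each intensity level by a function that peaks at one value inside the changed-pixel range and decays away from it, then turn the cumulative weights into a monotonic remapping so most of the output range is spent around the changed intensities. The Gaussian shape, threshold, and median peak are assumptions, not the patent's definitions:

```python
import numpy as np

def enhancement_mapping(curr, prev, diff_thresh=10, sigma=20.0, levels=256):
    """Remap intensities so contrast is concentrated around the
    intensities of pixels that changed between two thermal frames."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    peak = np.median(curr[changed]) if changed.any() else levels / 2
    # weight function: maximum at `peak`, decaying with distance from it
    w = np.exp(-((np.arange(levels) - peak) ** 2) / (2 * sigma ** 2))
    w += 1e-3                                  # keep the mapping monotonic
    mapping = np.cumsum(w)
    mapping = (levels - 1) * mapping / mapping[-1]
    return mapping[curr], changed

prev = np.full((32, 32), 80, np.uint8)
curr = prev.copy(); curr[8:16, 8:16] = 160     # a warm object appears
out, changed = enhancement_mapping(curr, prev)
```

Both changed and unchanged pixels pass through the same mapping, but intensities near the changed range get the steepest (most contrast-enhancing) part of the curve.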
Image details processing method, apparatus, terminal, and storage medium
Embodiments of the present disclosure disclose an image details processing method, comprising: obtaining luminance data of the image; performing a non-linear transformation on the luminance data to obtain corresponding transformed data; performing low-frequency processing on the transformed data to obtain low-frequency data; and determining corrected luminance data based on the low-frequency data and the transformed data. The embodiments of the present disclosure also disclose an image details processing apparatus, a terminal, and a storage medium.
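The abstract leaves the non-linear transform and the low-frequency filter unspecified; a common base/detail reading, with a log transform and a 3x3 box mean assumed, might look like:

```python
import numpy as np

def enhance_details(lum, gain=1.5):
    """Log-domain base/detail split: non-linear transform, low-pass
    for the low-frequency layer, recombine with boosted detail."""
    t = np.log1p(lum.astype(float))                   # non-linear transform
    pad = np.pad(t, 1, mode='edge')
    low = sum(np.roll(np.roll(pad, dy, 0), dx, 1)     # 3x3 box mean
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))[1:-1, 1:-1] / 9
    corrected = low + gain * (t - low)                # boost the detail layer
    return np.expm1(corrected)                        # back to linear luminance

lum = np.full((8, 8), 100.0); lum[4, 4] = 120.0       # one bright detail
out = enhance_details(lum)
```

The isolated bright pixel is pushed further above its surroundings, while flat regions are returned unchanged.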
SUPPRESSION OF TAGGED ELEMENTS IN MEDICAL IMAGES
A mechanism for reducing the appearance of tagged elements in a medical image. This is achieved by processing the medical image to generate a separate suppression image that contains only the tagged elements. The medical image and the suppression image are then combined to reduce the appearance of the tagged elements in the medical image. This can be achieved through modification of the suppression image before the combination, and/or weighting of the medical image and the suppression image during combination.
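The combination step, read as a weighted subtraction (one of the simplest forms the abstract's "weighting" could take), can be sketched as:

```python
import numpy as np

def suppress_tagged(image, suppression, weight=1.0):
    """Subtract the (optionally re-weighted) suppression image, which
    contains only the tagged elements, from the medical image."""
    return np.clip(image - weight * suppression, 0.0, None)

image = np.full((16, 16), 100.0)          # background tissue
image[2:4, 2:4] += 150.0                  # bright tagged element
suppression = np.zeros((16, 16))
suppression[2:4, 2:4] = 150.0             # suppression image: tags only
full = suppress_tagged(image, suppression)          # tags removed
partial = suppress_tagged(image, suppression, 0.5)  # tags attenuated
```

With `weight=1.0` the tags vanish into the background; smaller weights only attenuate them, matching the "reduce the appearance" phrasing.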
IMAGE PROCESSING DEVICE, IMAGING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM
There are provided an image processing device, an imaging apparatus, an image processing method, and a program that are capable of performing high dynamic range processing that reflects a shooting intention of a photographer. An image processing device (31) includes a metering mode information acquiring unit (101) that acquires metering mode information indicating a metering mode set from among a plurality of metering modes; a target area setting unit (103) that sets, on the basis of the metering mode information, a target area that is to be used to calculate a representative luminance that is to be used in high dynamic range processing; a first luminance calculating unit (105) that calculates the representative luminance on the basis of luminance information of the target area set by the target area setting unit; and a first condition determining unit (107) that determines a first condition of the high dynamic range processing on the basis of the representative luminance calculated by the first luminance calculating unit.
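The target-area and representative-luminance steps can be sketched as follows; the specific metering modes and area shapes are illustrative, not the patent's definitions:

```python
import numpy as np

def representative_luminance(lum, mode):
    """Set the target area from the metering mode, then use its mean
    luminance as the representative luminance for determining the
    high-dynamic-range processing condition."""
    h, w = lum.shape
    if mode == 'spot':                 # small central patch
        area = lum[h//2 - h//8 : h//2 + h//8, w//2 - w//8 : w//2 + w//8]
    elif mode == 'center-weighted':    # central half of the frame
        area = lum[h//4 : 3*h//4, w//4 : 3*w//4]
    else:                              # 'average': whole frame
        area = lum
    return float(area.mean())

lum = np.full((64, 64), 50.0)
lum[24:40, 24:40] = 200.0              # bright subject in the centre
spot = representative_luminance(lum, 'spot')
cw = representative_luminance(lum, 'center-weighted')
avg = representative_luminance(lum, 'average')
```

The three modes give very different representative luminances for the same frame, which is why the downstream HDR condition can reflect the photographer's metering choice.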
IMAGE FUSION ARCHITECTURE
Embodiments relate to circuitry for performing fusion of two images captured with two different exposure times to generate a fused image having a higher dynamic range. Information about first keypoints is extracted from the first image by processing pixel values of pixels in the first image. A model describing correspondence between the first image and the second image is then built by processing at least the information about first keypoints. A processed version of the first image is warped using mapping information in the model to generate a warped version of the first image spatially more aligned to the second image than to the first image. The warped version of the first image is fused with a processed version of the second image to generate the fused image.
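Greatly simplified sketch of the pipeline: here a single brightest-point "keypoint" per image stands in for the keypoint extraction, a pure translation stands in for the correspondence model, and blending prefers the short exposure where the long one nears saturation. All thresholds and the model form are assumptions:

```python
import numpy as np

def fuse(short_img, long_img, shift):
    """Warp the short exposure by the translation model, then blend,
    down-weighting the long exposure near saturation."""
    warped = np.roll(short_img, shift, axis=(0, 1))
    w_long = np.clip((250.0 - long_img) / 50.0, 0.0, 1.0)
    return w_long * long_img + (1.0 - w_long) * warped

rng = np.random.default_rng(1)
scene = rng.random((32, 32)) * 90.0
long_img = np.clip(scene * 2.5, 0.0, 255.0)        # longer exposure
short_img = np.roll(scene, (2, 3), axis=(0, 1))    # camera moved between shots
# one-keypoint translation model: match the brightest point in each image
k1 = np.unravel_index(np.argmax(short_img), short_img.shape)
k2 = np.unravel_index(np.argmax(long_img), long_img.shape)
shift = (k2[0] - k1[0], k2[1] - k1[1])
fused = fuse(short_img, long_img, shift)
```

After warping, the short exposure is spatially aligned to the long one, so the blend does not ghost.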
Image generating apparatus for combining plural images based on different shutter times
In an image generating apparatus, a setter is configured to variably set, for an image capturing task at a next capturing cycle, first and second shutter times and a total gain. The total gain is based on combination of an analog gain and a digital gain. An allocating unit is configured to obtain a threshold gain based on the first and second shutter times and a compression characteristic. The allocating unit is configured to variably allocate the total gain to at least one of the analog gain and the digital gain in accordance with a comparison among the total gain, the threshold gain, and an upper limit for the analog gain.
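The allocation logic can be sketched as below, with gains treated as multiplicative factors (total = analog x digital); the derivation of the threshold gain from the shutter times and compression characteristic is not reproduced:

```python
def allocate_gain(total_gain, threshold_gain, analog_max):
    """Split a total gain into analog and digital parts: analog gain
    is preferred up to the smaller of the threshold gain and the
    analog upper limit, and the remainder is applied digitally."""
    analog_cap = min(threshold_gain, analog_max)
    if total_gain <= analog_cap:
        analog, digital = total_gain, 1.0
    else:
        analog, digital = analog_cap, total_gain / analog_cap
    return analog, digital
```

Preferring analog gain up to a cap is a common design choice because analog amplification before readout tends to add less quantization noise than digital scaling.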