Patent classifications
H04N1/646
IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE, AND PROGRAM
An image processing method including, by a processor: acquiring a fundus image; performing a first enhancement processing on an image of at least a central region of the fundus image, and performing a second enhancement processing, different from the first enhancement processing, on an image of at least a peripheral region of the fundus image at a periphery of the central region; and generating an enhanced image of the fundus image based on a first image resulting from the first enhancement processing and a second image resulting from the second enhancement processing.
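A minimal sketch of the region-dependent scheme described in this abstract, assuming a single-channel image with values in [0, 1], a gamma lift as a stand-in for the first enhancement, a contrast stretch as a stand-in for the second, and a smooth radial mask for the blend (none of these specific choices come from the patent):

```python
import numpy as np

def enhance_fundus(img, center_frac=0.5):
    """Blend two differently enhanced versions of a fundus image:
    one enhancement tuned for the central region, another for the
    periphery, combined via a smooth radial weight mask."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Normalised distance from the image centre: 0 at centre, 1 at corner.
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)

    # "First enhancement" (central region): gamma correction lifting mid-tones.
    first = np.power(img, 0.7)
    # "Second enhancement" (peripheral region): linear contrast stretch.
    lo, hi = img.min(), img.max()
    second = (img - lo) / max(hi - lo, 1e-6)

    # Smooth weight: 1 inside the central region, falling to 0 outside it.
    wgt = np.clip((center_frac + 0.1 - r) / 0.2, 0.0, 1.0)
    return wgt * first + (1.0 - wgt) * second
```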
IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD
An image processing apparatus includes an input unit configured to receive first material appearance data representing a material appearance of an image, a material appearance mapping unit configured to convert the first material appearance data into second material appearance data corresponding to a material appearance reproducible by a material appearance reproducing apparatus, and a conversion unit configured to convert the second material appearance data into control data for reproducing the material appearance of the image using the material appearance reproducing apparatus. The first material appearance data includes a gloss signal corresponding to a specular gloss and a gloss signal corresponding to an image clarity.
METHOD OF MAPPING SOURCE COLORS OF AN IMAGE USING A LUT HAVING COLORS OUTSIDE A SOURCE COLOR GAMUT
According to this method, the mapping color LUT has input colors that sample not only a source color gamut (included in the input encoding color space in which the input colors of this LUT are encoded) but also a portion of the input encoding color space that is not included in the source color gamut. Preferably, this color LUT further includes output colors located outside the target color gamut. Accuracy of the mapping is improved, notably for source colors located near the boundary of the source color gamut.
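The claimed benefit can be illustrated with a plain trilinear 3D-LUT lookup whose input nodes sample the whole encoding cube [0, 1]^3, so that interpolation near the gamut boundary always has well-defined (possibly out-of-gamut) neighbouring nodes. This is an illustrative sketch, not the patented mapping:

```python
import numpy as np

def apply_lut3d(lut, rgb):
    """Trilinear interpolation into an n*n*n*3 mapping LUT whose input
    nodes sample the full encoding space [0, 1]^3, i.e. including
    points outside the source gamut."""
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)
    i0 = np.floor(pos).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    f = pos - i0                      # fractional position within the cell
    out = np.zeros(3)
    # Accumulate the 8 corner nodes of the enclosing LUT cell.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                idx = (i1[0] if dr else i0[0],
                       i1[1] if dg else i0[1],
                       i1[2] if db else i0[2])
                wgt = ((f[0] if dr else 1 - f[0]) *
                       (f[1] if dg else 1 - f[1]) *
                       (f[2] if db else 1 - f[2]))
                out += wgt * lut[idx]
    return out
```

With an identity LUT (each node maps to its own coordinates), the lookup reproduces the input exactly, which is a convenient sanity check for the interpolation.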
Photo realistic rendering of smile image after treatment
A method may include: receiving a facial image of the patient that depicts the patient's teeth; receiving a 3D model of the patient's teeth; determining a color palette of the depiction of the patient's teeth; coding the 3D model of the patient's teeth based on attributes of the 3D model; providing the 3D model, the color palette, and the coded 3D model to a neural network; processing the 3D model, the color palette, and the coded 3D model with the neural network to generate a processed image of the patient's teeth; simulating specular highlights on the processed image of the patient's teeth; and inserting the processed image of the patient's teeth into a mouth opening of the facial image.
Method, Apparatus and System for Determining a Luma Value
A method of determining luma values from 4:4:4 RGB video data for encoding chroma downsampled 4:2:0 YCbCr video data into a bitstream. Initial coefficients are determined for a region of a colour space, the region being one of a plurality of regions located in the colour space, each region having a plurality of associated coefficients. The determined initial coefficients are applied to an initial image to produce a test image, the test image being a chroma downsampled 4:2:0 YCbCr version of the initial image. A measure of quality is determined by comparing the initial image and the test image. The determined initial coefficients are modified to increase the determined measure of quality. Luma values are then determined from the 4:4:4 RGB video data using the modified coefficients, for encoding chroma downsampled 4:2:0 YCbCr video data into a bitstream.
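The determine/test/measure/modify loop can be sketched as follows, illustratively assuming a single colour-space region, BT.709 starting coefficients, 2x2 box chroma downsampling, and random hill-climbing as the modification rule (none of these specifics come from the abstract):

```python
import numpy as np

BT709 = np.array([0.2126, 0.7152, 0.0722])  # initial luma coefficients

def to_ycbcr_420(rgb, coeffs):
    """Chroma-downsampled 4:2:0 conversion with the given luma coefficients."""
    y = rgb @ coeffs
    cb = (rgb[..., 2] - y) * 0.5
    cr = (rgb[..., 0] - y) * 0.5
    # 2x2 box downsample of the chroma planes (4:2:0).
    cb = cb.reshape(cb.shape[0] // 2, 2, cb.shape[1] // 2, 2).mean(axis=(1, 3))
    cr = cr.reshape(cr.shape[0] // 2, 2, cr.shape[1] // 2, 2).mean(axis=(1, 3))
    return y, cb, cr

def from_ycbcr_420(y, cb, cr, coeffs):
    """Reconstruct RGB from the 4:2:0 test image (nearest-neighbour upsample)."""
    cb = np.repeat(np.repeat(cb, 2, 0), 2, 1)
    cr = np.repeat(np.repeat(cr, 2, 0), 2, 1)
    r = y + 2 * cr
    b = y + 2 * cb
    g = (y - coeffs[0] * r - coeffs[2] * b) / coeffs[1]
    return np.stack([r, g, b], axis=-1)

def refine(rgb, coeffs, step=0.01, iters=20):
    """Hill-climb: perturb the coefficients and keep any change that raises
    the quality measure (negative MSE) of the round-tripped test image."""
    def quality(c):
        rec = from_ycbcr_420(*to_ycbcr_420(rgb, c), c)
        return -np.mean((rgb - rec) ** 2)
    best, q = coeffs.copy(), quality(coeffs)
    for _ in range(iters):
        cand = np.clip(best + np.random.uniform(-step, step, 3), 0.01, 1.0)
        cand /= cand.sum()            # keep the coefficients summing to 1
        qc = quality(cand)
        if qc > q:
            best, q = cand, qc
    return best
```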
CHANGE DEGREE DERIVING DEVICE, CHANGE DEGREE DERIVING SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
Provided is a change degree deriving device including a receiving unit that receives a detection image and an image obtained by capturing a known color body and an object while focusing on the object, the known color body including a plurality of color samples, each of which has a known color numerical value, and a detecting unit that detects a focus deviation of the color samples in the image, based on the detection image.
IMAGE COMPRESSION
The invention provides methods that improve image compression and/or quality within the JPEG process by using a low-pass filter to remove high-frequency components from image data, thereby removing blocking artifacts. Preferred embodiments apply the low-pass filter to the chroma components after decompression, prior to conversion into the RGB color space.
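A sketch of the preferred embodiment's ordering, using a simple box filter as the low-pass stage and BT.601 full-range equations for the final conversion (both are illustrative stand-ins; the patent does not fix either choice):

```python
import numpy as np

def box_blur(ch, k=3):
    """Simple k*k mean filter used here as the low-pass step."""
    pad = k // 2
    p = np.pad(ch, pad, mode='edge')
    out = np.zeros(ch.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + ch.shape[0], dx:dx + ch.shape[1]]
    return out / (k * k)

def deblock_chroma(y, cb, cr):
    """After JPEG decompression, low-pass only the chroma planes to
    suppress high-frequency blocking artifacts, then convert to RGB
    (BT.601 full-range equations, 8-bit planes centred at 128)."""
    cb_f, cr_f = box_blur(cb), box_blur(cr)
    r = y + 1.402 * (cr_f - 128)
    g = y - 0.344136 * (cb_f - 128) - 0.714136 * (cr_f - 128)
    b = y + 1.772 * (cb_f - 128)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)
```

Filtering chroma only leaves the luma plane, where the eye is most sensitive to detail, untouched.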
Method and device for processing an image signal of an image sensor for a vehicle
A method includes obtaining, from an image sensor of a vehicle, an image signal that represents a vector of a plurality of input measured color values; determining weighting values as a function of the input measured color values using a determination rule; ascertaining model matrices from a plurality of stored model matrices as a function of the weighting values, each of the model matrices having been generated for possible input measured color values using a training rule; generating a color reconstruction matrix using the ascertained model matrices and at least one of the determined weighting values according to a generating rule; and applying the generated color reconstruction matrix to the measured color value vector in order to produce an output color vector that represents a processed image signal.
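The generating rule can be sketched as a weighted sum of the stored model matrices, with a caller-supplied weighting function standing in for the determination rule (all names here are illustrative, not the patent's):

```python
import numpy as np

def reconstruct_color(measured, model_matrices, weight_fn):
    """Blend stored model matrices with input-dependent weights to form a
    per-measurement color reconstruction matrix, then apply it to the
    measured color value vector."""
    w = np.asarray(weight_fn(measured), dtype=float)  # determination rule
    w = w / w.sum()                                   # normalise the weights
    # Generating rule: convex combination of the stored model matrices.
    M = sum(wi * Mi for wi, Mi in zip(w, model_matrices))
    return M @ measured                               # output color vector
```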
METHOD FOR CONVERTING AN IMAGE, DRIVER ASSISTANCE SYSTEM AND MOTOR VEHICLE
The invention relates to a method for converting (S1) an image (7) by means of an evaluation unit (4) of a motor vehicle (1), wherein the image (7) is captured from an environmental region (6) of a motor vehicle (1) by means of a camera (3) of the motor vehicle (1), and the image (7) includes an alpha channel (12) and at least one color channel in a predetermined color model, and the image (7) is converted into an alpha channel (12) and a Y channel (9) of a YUV color model and a U channel (10) of the YUV color model and a V channel (11) of the YUV color model, wherein in converting the image (7), the alpha channel (12) and the Y channel (9) and the U channel (10) and the V channel (11) are embedded in a converted image (8) of the image (7).
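Illustratively, embedding the alpha channel alongside the Y, U and V channels of a converted image might look like the following (BT.601 full-range coefficients are assumed for the sketch; the abstract does not specify them):

```python
import numpy as np

def rgba_to_ayuv(img):
    """Convert an RGBA camera image to a converted image that carries the
    alpha channel together with Y, U and V planes (BT.601 full range,
    chroma centred at 128)."""
    r, g, b, a = [img[..., i].astype(float) for i in range(4)]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b + 128
    v = 0.615 * r - 0.51499 * g - 0.10001 * b + 128
    return np.stack([a, y, u, v], axis=-1)
```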
Dithering for chromatically subsampled image formats
Dithering techniques for images are described herein. An input image of a first bit depth is separated into a luma and one or more chroma components. A model of the optical transfer function (OTF) of the human visual system (HVS) is used to generate dither noise which is added to the chroma components of the input image. The model of the OTF is adapted in response to viewing distances determined based on the spatial resolution of the chroma components. An image based on the original input luma component and the noise-modified chroma components is quantized to a second bit depth, which is lower than the first bit depth, to generate an output dithered image.
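A sketch of the tail end of this pipeline, with plain uniform noise standing in for the OTF/HVS-shaped dither of the abstract, and chroma planes at half the luma resolution as in a 4:2:0 layout:

```python
import numpy as np

def dither_chroma(y, cb, cr, in_bits=10, out_bits=8, rng=None):
    """Add dither noise to the chroma planes only (uniform noise here;
    the abstract shapes it with an OTF model of the HVS), then quantize
    all planes from in_bits down to out_bits."""
    rng = rng if rng is not None else np.random.default_rng(0)
    q = 1 << (in_bits - out_bits)     # quantization step in input units
    noise = lambda shape: rng.uniform(-q / 2, q / 2, shape)
    quant = lambda p: np.clip(np.round(p / q), 0, (1 << out_bits) - 1).astype(int)
    # Luma passes through unmodified; only chroma receives dither noise.
    return quant(y), quant(cb + noise(cb.shape)), quant(cr + noise(cr.shape))
```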