H04N1/6005

F-MODE DISPLAY OF ULTRASOUND IMAGES FOR CHARACTERIZATION OF TISSUE STRUCTURES AND PROPERTIES
20230117281 · 2023-04-20

A method for generating an F-mode display of an ultrasound image includes: collecting ultrasound image data that include a plurality of physical properties; mapping the physical properties to a plurality of color components of a color space; and combining the color components to generate an F-mode display of the ultrasound image. Each physical property corresponds to a distinct color component.
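As a minimal sketch of the mapping-and-combining step, each property map could be normalized and assigned to one channel of an RGB color space. The channel assignment, normalization, and the example property names are assumptions; the abstract does not fix the color space.

```python
import numpy as np

def fmode_display(properties):
    """Combine up to three physical-property maps (e.g. stiffness,
    attenuation, sound speed -- hypothetical choices) into one RGB
    F-mode image, one property per color component."""
    channels = []
    for prop in properties[:3]:
        p = np.asarray(prop, dtype=float)
        lo, hi = p.min(), p.max()
        # Normalize each property independently to [0, 1].
        channels.append((p - lo) / (hi - lo) if hi > lo else np.zeros_like(p))
    # Combine the color components into a single display image.
    return np.stack(channels, axis=-1)
```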

CALCULATING COLOR DATA FOR INSPECTION BY CONVERTING NORMAL AND SPOT COLOR PRINT DATA INTO COLOR SPACE OF DECREASED COLORS AND COMBINING CONVERTED COLOR DATA
20230117852 · 2023-04-20

An image processing apparatus includes a processor configured to calculate color data for inspection and to output it. The printing data, which serves as the base of the image to be printed, includes a normal color (a normally used color) and a spot color (a color other than the normal color). The processor converts the printing data of the normal color and the printing data of the spot color, each separately, into color data of another color space in which the number of colors is decreased, and then combines the converted color data to produce the color data for inspection.
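A rough sketch of the per-plate conversion and combination, assuming each plate is given as a coverage map plus an ink color already expressed in the reduced (here RGB) space. The multiplicative blend is an assumption; the abstract does not specify how the converted data are combined.

```python
import numpy as np

def to_inspection_rgb(plates):
    """Convert each printing plate (normal or spot color) to the
    reduced RGB space separately, then combine the converted data.
    `plates` is a list of (coverage, ink_rgb) pairs; the subtractive
    multiplicative mixing below is an assumed model."""
    out = np.ones(plates[0][0].shape + (3,))
    for coverage, ink_rgb in plates:
        ink = np.asarray(ink_rgb, dtype=float)
        # Convert this plate alone: interpolate white -> ink by coverage.
        layer = 1.0 - coverage[..., None] * (1.0 - ink)
        out *= layer  # combine the converted color data
    return out
```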

Method and system for image enhancement

A method for image processing comprises the following steps: generating a first histogram from a first image; calculating, from the first image, a first parameter profile indicative of the quality of the first image; adjusting the first parameter profile to generate a second parameter profile; using the second parameter profile to generate, via a statistical distribution generator, a statistical distribution characterized by at least three parameters; using the statistical distribution to perform a histogram specification on the first histogram to generate a second histogram; and generating a second image based on the first image and the second histogram.
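The distribution-generation and histogram-specification steps can be sketched as follows. A generalized Gaussian is one plausible three-parameter distribution (location, scale, shape); the abstract only requires "at least three parameters". The specification itself is standard CDF matching.

```python
import numpy as np

def generalized_gaussian(mu, alpha, beta, n=256):
    """Three-parameter target distribution (location mu, scale alpha,
    shape beta) -- an assumed concrete choice of statistical
    distribution generator."""
    x = np.arange(n, dtype=float)
    return np.exp(-(np.abs(x - mu) / alpha) ** beta)

def hist_specification(img, target_pdf):
    """Remap an 8-bit image so its histogram follows `target_pdf`
    (length-256 array), by matching cumulative distributions."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    src_cdf = np.cumsum(hist) / hist.sum()
    tgt_cdf = np.cumsum(target_pdf) / np.sum(target_pdf)
    # For each source level, find the target level with the nearest CDF.
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255)
    return mapping[img].astype(np.uint8)
```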

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM
20170366709 · 2017-12-21

The image processing apparatus (10) includes three units. An image matching unit (14) matches the positional relationship between read image data and the original document image data (20) of the target printed matter (22), where the read image data is either first read image data (24) or second read image data obtained by color conversion of the first read image data (24). A statistical processing unit (16) generates statistical information reflecting the distribution of read image signal values in each region of the read image data that corresponds to a region having the same original document image signal values in the original document image data (20). A mismatching detection unit (18) detects color mismatching between the original document image data (20) and the target printed matter (22) on the basis of the statistical information.
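The statistical-processing and mismatch-detection units can be sketched together, simplified to a single channel: group scanned pixels by the original-document signal value at the same (already registered) positions, compute a statistic per group, and flag deviations. The median statistic and the threshold `tol` are assumptions.

```python
import numpy as np

def detect_color_mismatch(doc, scan, tol=10.0):
    """For each distinct original-document signal value, collect the
    read (scanned) values at the same positions and flag values whose
    group statistic deviates from the document value by more than
    `tol`. Single-channel sketch; `tol` is an assumed tolerance."""
    mismatched = []
    for v in np.unique(doc):
        # Statistical information reflecting the distribution of read
        # image signal values in the region sharing document value v:
        group = scan[doc == v].astype(float)
        if abs(np.median(group) - v) > tol:
            mismatched.append(int(v))
    return mismatched
```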

CORRECTION OF COLOR TINTED PIXELS CAPTURED IN LOW-LIGHT CONDITIONS
20230199133 · 2023-06-22

Aspects of the present disclosure relate to color correction in image processing pipelines. An example method may include receiving first image data corresponding to reference luminance data and reference chrominance data for each of a plurality of pixels, determining that the first image data corresponds to a raw image captured in a dark environment, generating second image data by performing one or more tone mapping operations on the first image data, the second image data corresponding to current luminance data and current chrominance data for each of the plurality of pixels, and generating output image data. For each pixel of the plurality of pixels, the output image data may include an output luminance value of a corresponding pixel of the current luminance data, and chrominance values of the corresponding pixel from a selected one of the reference chrominance data and the current chrominance data, the selection based at least in part on the reference chrominance data and the current chrominance data.
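One way to realize the per-pixel chrominance selection is sketched below: keep the tone-mapped luminance, but fall back to the reference (pre-tone-mapping) chrominance wherever tone mapping inflated the chroma magnitude. The comparison rule and the `gain` factor are assumptions; the abstract only says the selection is based on both chrominance sets.

```python
import numpy as np

def correct_tinted_chroma(y_ref, cbcr_ref, y_cur, cbcr_cur, gain=1.5):
    """Output luminance comes from the current (tone-mapped) data;
    chrominance is selected per pixel from reference or current data.
    Assumed rule: if the current chroma magnitude exceeds `gain` times
    the reference magnitude, use the reference chroma (the tint was
    likely amplified by tone mapping in the dark capture)."""
    ref_mag = np.linalg.norm(cbcr_ref - 128.0, axis=-1)
    cur_mag = np.linalg.norm(cbcr_cur - 128.0, axis=-1)
    use_ref = cur_mag > gain * ref_mag
    cbcr_out = np.where(use_ref[..., None], cbcr_ref, cbcr_cur)
    return y_cur, cbcr_out
```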

THREE DIMENSIONAL, HUE-PLANE PRESERVING AND DIFFERENTIABLE QUASI-LINEAR TRANSFORMATION METHOD FOR COLOR CORRECTION
20170359487 · 2017-12-14

Color correction methods relate device-dependent sensor responses (RGB) to device-independent color values (XYZ). The present invention discloses a new approach to Hue Plane Preserving Color Correction (HPPCC) using weighted constrained 3×3 matrices. Accordingly, the methods of the present invention employ hue-angle-specific weighted matrixing. Given a device RGB from which a device hue angle is derived, a corresponding transformation matrix is found as the normalized weighted sum of all pre-calculated constrained white-point and training-color preserving matrices. Each weight is calculated as a power function of the minimum difference between the device and the training color hue angle. The weighting function gives local influence to the matrices that are in close hue-angle proximity to the device color. The power of the function can be further optimized for global accuracy. The methods of the present invention are termed HPPCC-WCM, for Hue Plane Preserving Color Correction Weighted Constrained Methods. Experiments performed using different input spectra demonstrate that the claimed methods consistently improve on the stability and accuracy of state-of-the-art color correction methods.
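The matrix-blending step can be sketched as follows. An inverse power of the hue-angle difference is used as one plausible "power function" weighting (a negative exponent gives nearby training hues dominant influence); the exact weighting, hue formula, and exponent are assumptions.

```python
import numpy as np

def hppcc_wcm_matrix(rgb, training_hues, matrices, p=4.0):
    """Apply a hue-angle-specific matrix: the normalized weighted sum
    of pre-calculated 3x3 training matrices, weighted by an inverse
    power of the (wrapped) hue-angle distance to each training color.
    Sketch only; constraint construction of the matrices is omitted."""
    r, g, b = rgb
    hue = np.arctan2(np.sqrt(3) * (g - b), 2 * r - g - b)  # device hue angle
    # Minimum angular difference, wrapped into [0, pi].
    d = np.abs(np.angle(np.exp(1j * (hue - training_hues))))
    w = 1.0 / np.maximum(d, 1e-9) ** p
    w /= w.sum()
    # Normalized weighted sum of the training matrices.
    M = np.tensordot(w, matrices, axes=(0, 0))
    return M @ np.asarray(rgb)
```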

METHOD OF MAPPING SOURCE COLORS OF AN IMAGE USING A LUT HAVING COLORS OUTSIDE A SOURCE COLOR GAMUT
20170359491 · 2017-12-14

According to this method, the mapping color LUT has input colors that sample not only a source color gamut (included in an input encoding color space in which input colors of this LUT are encoded) but also a portion of the input encoding color space which is not included in the source color gamut. Preferably, this color LUT further includes output colors located outside the target color gamut. Accuracy of the mapping is improved, notably for source colors located near the boundary of the source color gamut.
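The extended-sampling idea can be illustrated with a per-channel 1D LUT (a simplification of the patent's 3D LUT): the input nodes deliberately cover a margin beyond the [0, 1] source gamut, so boundary colors are handled by interpolation between real nodes rather than by clamping or extrapolation. The node range and placeholder mapping are illustrative only.

```python
import numpy as np

# Input nodes sample outside the [0, 1] source gamut.
lut_in = np.linspace(-0.1, 1.1, 13)
# Placeholder output mapping; note it can also leave [0, 1],
# mirroring the LUT's output colors outside the target gamut.
lut_out = 0.9 * lut_in + 0.05

def map_channel(values):
    # Because the nodes extend past the gamut boundary, np.interp
    # interpolates (rather than clamps) even at 0.0 and 1.0.
    return np.interp(values, lut_in, lut_out)
```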

Dynamic Global Tone Mapping with Integrated 3D Color Look-up Table

The processing of RGB image data can be optimized by performing optimization operations on the image data when it is converted into the YCbCr color space. First, a raw RGB color space is converted into a YCbCr color space, and raw RGB image data is converted into YCbCr image data using the YCbCr color space. For each Y-layer of the YCbCr image data, a 2D LUT is generated. The YCbCr image data is converted into optimized CbCr image data using the 2D LUTs, and optimized YCbCr image data is generated by blending CbCr image data corresponding to multiple Y-layers. The optimized YCbCr image data is converted into sRGB image data, and a tone curve is applied to the sRGB image data to produce optimized sRGB image data.
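The per-Y-layer lookup and blending step can be sketched as below: for each pixel, look up CbCr in the two 2D LUTs whose Y nodes bracket the pixel's luma, then blend the two results linearly by the pixel's position between the nodes. A floor (rather than interpolated) lookup inside each 2D LUT is used for brevity; the LUT layout is an assumption.

```python
import numpy as np

def apply_3d_via_2d_layers(ycbcr, layer_luts, y_nodes):
    """Blend per-Y-layer 2D CbCr LUTs. `layer_luts` has shape
    (len(y_nodes), n, n, 2): one n x n Cb/Cr LUT per Y node."""
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
    # Index of the lower bracketing Y node, and blend factor t.
    idx = np.clip(np.searchsorted(y_nodes, y) - 1, 0, len(y_nodes) - 2)
    t = (y - y_nodes[idx]) / (y_nodes[idx + 1] - y_nodes[idx])
    n = layer_luts.shape[1]
    ci = np.clip((cb * (n - 1)).astype(int), 0, n - 1)
    cj = np.clip((cr * (n - 1)).astype(int), 0, n - 1)
    lo = layer_luts[idx, ci, cj]       # result from lower Y-layer LUT
    hi = layer_luts[idx + 1, ci, cj]   # result from upper Y-layer LUT
    out = ycbcr.copy()
    # Blend CbCr results corresponding to the two Y-layers.
    out[..., 1:] = (1 - t)[..., None] * lo + t[..., None] * hi
    return out
```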

3D Color Mapping and Tuning in an Image Processing Pipeline

The processing of RGB image data can be optimized by performing optimization operations on the image data when it is converted into the YCbCr color space. First, a raw RGB color space is converted into a YCbCr color space, and raw RGB image data is converted into YCbCr image data using the YCbCr color space. For each Y-layer of the YCbCr image data, a 2D LUT is generated. The YCbCr image data is converted into optimized CbCr image data using the 2D LUTs, and optimized YCbCr image data is generated by blending CbCr image data corresponding to multiple Y-layers. The optimized YCbCr image data is converted into sRGB image data, and a tone curve is applied to the sRGB image data to produce optimized sRGB image data.

SYSTEMS AND METHODS FOR FACILITATING TRACKING A TARGET IN AN IMAGED SCENE
20170345156 · 2017-11-30

In systems and methods disclosed herein, a transform may be determined for a given target object and background in an image between a native color space of the image and a second multi-dimensional color space representing a preferred color space for a tracker. The transform may then be applied to convert a plurality of pixels in the imaged scene to the second multi-dimensional color space, thereby increasing the contrast perceived by the tracker between the target object and the background.
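One concrete way to determine such a contrast-maximizing transform is Fisher's linear discriminant on sampled target and background pixel colors. This is an assumed choice; the abstract claims color-space transforms generally, not this method specifically.

```python
import numpy as np

def contrast_transform(target_px, background_px):
    """Find a linear color-space direction (unit 3-vector) that
    separates target pixels from background pixels: Fisher's linear
    discriminant w ~ Sw^-1 (mu_target - mu_background), with a small
    ridge term for numerical stability."""
    mt, mb = target_px.mean(0), background_px.mean(0)
    Sw = np.cov(target_px.T) + np.cov(background_px.T)  # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(3), mt - mb)
    return w / np.linalg.norm(w)
```

Projecting every scene pixel onto the returned axis (`pixels @ w`) yields the transformed representation in which target/background contrast is increased for the tracker.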