H04N9/77

METHODS FOR CONVERTING AN IMAGE AND CORRESPONDING DEVICES
20230050498 · 2023-02-16 ·

The invention concerns a method for converting an input image comprising an input luminance component made of elements into an output image comprising an output luminance component made of elements, the ranges of the output and input luminance component element values being of different extent. For the input image, the method comprises: computing a value of a general variable representative of at least two input luminance component element values; transforming each input luminance component element value into a corresponding output luminance component element value according to the computed general variable value; and converting the input image using the determined output luminance component element values. The transforming step uses a set of pre-determined output values organized into a 2D Look-Up-Table (2D LUT) comprising two input arrays that index a set of chosen input luminance component values and a set of chosen general variable values, respectively. Each pre-determined output value matches a pair made of an indexed input luminance component value and an indexed general variable value, and each input luminance component element value is transformed into its output counterpart using at least one pre-determined output value.
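
The 2D-LUT transform described above can be sketched as follows. This is a minimal illustration only: the choice of mean luminance as the "general variable", the particular index arrays, and the use of bilinear interpolation between the four nearest LUT entries are assumptions, not details taken from the abstract.

```python
import numpy as np

def transform_luminance(y_in, g, y_index, g_index, lut):
    """Map one input luminance value to an output value via a 2D LUT.

    y_index : sorted 1-D array of chosen input luminance values
    g_index : sorted 1-D array of chosen general-variable values
    lut     : 2-D array; lut[i, j] is the pre-determined output for
              the pair (y_index[i], g_index[j])

    Bilinearly interpolates between the four nearest LUT entries.
    """
    i = np.clip(np.searchsorted(y_index, y_in) - 1, 0, len(y_index) - 2)
    j = np.clip(np.searchsorted(g_index, g) - 1, 0, len(g_index) - 2)
    ty = (y_in - y_index[i]) / (y_index[i + 1] - y_index[i])
    tg = (g - g_index[j]) / (g_index[j + 1] - g_index[j])
    lo = (1 - ty) * lut[i, j] + ty * lut[i + 1, j]
    hi = (1 - ty) * lut[i, j + 1] + ty * lut[i + 1, j + 1]
    return (1 - tg) * lo + tg * hi

def convert_image(luma, y_index, g_index, lut):
    # General variable computed from at least two element values;
    # the mean over the whole component is an assumed example.
    g = float(luma.mean())
    f = np.vectorize(lambda y: transform_luminance(y, g, y_index, g_index, lut))
    return f(luma)
```

With an identity LUT (each row equal to its indexed luminance, independent of the general variable) the conversion leaves the component unchanged, which is a convenient sanity check.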

CAMERA AND IMAGE OBTAINING METHOD
20230049248 · 2023-02-16 ·

A first light beam can be collected by using an optical module, and a second light beam can be obtained based on the first light beam. An image sensor can perform photoelectric conversion on the infrared light in the second light beam that reaches a first channel to obtain a first electrical signal, and on the visible light in the second light beam that reaches a second channel to obtain a second electrical signal. An initial image can be generated based on the first electrical signal and the second electrical signal. A color image and a grayscale image can be sent to an image processor. The image processor can receive the color image and the grayscale image and perform fusion processing on them to obtain a fused image.
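
The final fusion step admits many realizations; the sketch below shows one simple, assumed scheme in which the grayscale (infrared-derived) image contributes luminance while the color image contributes color ratios. The blending weight and the crude channel-mean luminance are illustrative choices, not taken from the abstract.

```python
import numpy as np

def fuse(color_rgb, gray, luma_weight=0.7):
    """Fuse a color image with a grayscale image.

    Blends the grayscale values into the color image's luminance
    while preserving the color ratios of each pixel.
    """
    color = color_rgb.astype(np.float64)
    luma = color.mean(axis=-1, keepdims=True)          # crude luminance proxy
    fused_luma = (1 - luma_weight) * luma + luma_weight * gray[..., None]
    scale = np.where(luma > 0, fused_luma / np.maximum(luma, 1e-6), 1.0)
    return np.clip(color * scale, 0, 255).astype(np.uint8)
```

When the grayscale image already matches the color image's luminance, the fusion is a no-op, which makes the scaling logic easy to verify.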

HDR color processing for saturated colors
11582434 · 2023-02-14 ·

To mitigate some problems of the pixel color mapping used in HDR video decoding of the SLHDR type, a high dynamic range video encoding circuit (300) is taught. It is configured to encode a high dynamic range image (IM_HDR) of a first maximum pixel luminance (PB_C1) together with a second image (Im_LWRDR) of lower dynamic range and correspondingly lower second maximum pixel luminance (PB_C2). The second image is functionally encoded as a luma mapping function (400) for decoders to apply to pixel lumas (Y_PQ) of the high dynamic range image to obtain corresponding pixel lumas (PO) of the second image. The encoder comprises a data formatter (304) configured to output, to a video communication medium (399), the high dynamic range image and metadata (MET) encoding the luma mapping function (400). The functional encoding of the second image is also based on a color lookup table (CL(Y_PQ)) which encodes a multiplier constant (B) for all possible values of the pixel lumas of the high dynamic range image, and the formatter is configured to output this color lookup table in the metadata. The encoding circuit is characterized in that it comprises a gain determination circuit (302) configured to determine a luma gain value (G_PQ) which quantifies the ratio of the output image luma at a luma position equal to the correct normalized luminance position to the output luma for the luma of the pixel of the high dynamic range image, and a color lookup table determination circuit (303) configured to determine the color lookup table (CL(Y_PQ)) based on values of the luma gain value for the various lumas of pixels present in the high dynamic range image. Similarly, we teach how the same principles can be embodied in an SLHDR-type video decoder.
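
The gain-to-LUT relationship can be sketched as follows. This is an assumed, simplified reading of the claim: the luma gain is the ratio of the mapping function evaluated at the "correct" normalized luminance position to the same function evaluated at the pixel's own luma, and per-pixel gains are averaged into LUT bins indexed by Y_PQ. The bin count, the averaging, and the function names are illustrative, not from the patent.

```python
import numpy as np

def luma_gain(map_fn, y_pq, y_correct):
    """G_PQ: mapped luma at the correct normalized luminance position,
    divided by the mapped luma at the pixel's own luma."""
    return map_fn(y_correct) / map_fn(y_pq)

def build_color_lut(map_fn, pixel_lumas, correct_lumas, n_entries=256):
    """Aggregate per-pixel gain values into a multiplier LUT over Y_PQ."""
    lut = np.ones(n_entries)
    sums = np.zeros(n_entries)
    counts = np.zeros(n_entries)
    for y, yc in zip(pixel_lumas, correct_lumas):
        idx = min(int(y * (n_entries - 1)), n_entries - 1)
        sums[idx] += luma_gain(map_fn, y, yc)
        counts[idx] += 1
    mask = counts > 0
    lut[mask] = sums[mask] / counts[mask]
    return lut
```

When every pixel already sits at its correct position, every gain is 1 and the LUT degenerates to the identity multiplier.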

Signal reshaping for high dynamic range signals

In a method to improve backwards compatibility when decoding high-dynamic-range images coded in a wide color gamut (WCG) space which may not be compatible with legacy color spaces, hue and/or saturation values of images in an image database are computed for both a legacy color space (say, YCbCr-gamma) and a preferred WCG color space (say, IPT-PQ). Based on a cost function, a reshaped color space is computed so that the distance between the hue values in the legacy color space and rotated hue values in the preferred color space is minimized. HDR images are coded in the reshaped color space. Legacy devices can still decode standard dynamic range images assuming they are coded in the legacy color space, while updated devices can use color reshaping information to decode HDR images in the preferred color space at full dynamic range.
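
The cost-function minimization can be illustrated with a toy one-parameter version, where the reshaping is a single rotation of the chroma plane and the cost is the mean squared circular distance between legacy hues and rotated WCG hues. The grid search and the squared-distance cost are assumptions for illustration; the actual patent may use a richer reshaping model and cost.

```python
import numpy as np

def best_rotation(hues_legacy, hues_wcg):
    """Find the chroma-plane rotation (radians) minimizing the mean
    squared circular distance between legacy and rotated WCG hues."""
    def cost(theta):
        # Wrap differences into (-pi, pi] so hue distance is circular.
        d = (hues_legacy - (hues_wcg + theta) + np.pi) % (2 * np.pi) - np.pi
        return np.mean(d ** 2)
    thetas = np.linspace(-np.pi, np.pi, 3601)   # 0.1-degree grid
    costs = [cost(t) for t in thetas]
    return thetas[int(np.argmin(costs))]
```

If the WCG hues are a uniformly rotated copy of the legacy hues, the search recovers that rotation to within the grid resolution.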

METHOD AND APPARATUS FOR OBTAINING A MAPPING CURVE PARAMETER
20230042923 · 2023-02-09 ·

A mapping curve parameter obtaining method and apparatus are described. The method includes obtaining a first mapping curve parameter set and first maximum target system display luminance, and obtaining a display luminance parameter set, where the display luminance parameter set includes maximum display luminance and/or minimum display luminance of a display device. The method also includes obtaining an adjustment coefficient set, where the adjustment coefficient set includes one or more adjustment coefficients, and the one or more adjustment coefficients correspond to one or more parameters in the first mapping curve parameter set. Furthermore, the method includes adjusting the one or more parameters in the first mapping curve parameter set based on the display luminance parameter set, the first maximum target system display luminance, and the adjustment coefficient set to obtain a second mapping curve parameter set, where the second mapping curve parameter set includes one or more adjusted parameters.
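
A minimal sketch of the adjustment step follows. The specific rule used here, scaling each first-set parameter by its adjustment coefficient and the ratio of the device's peak luminance to the first maximum target system display luminance, is an assumption chosen for illustration; the abstract only states that the parameters are adjusted based on these inputs.

```python
def adjust_parameters(params, coeffs, max_target_lum, display_max):
    """Derive the second mapping curve parameter set from the first.

    params          : dict of first-set parameters
    coeffs          : dict of adjustment coefficients (missing keys
                      mean "leave that parameter unchanged")
    max_target_lum  : first maximum target system display luminance
    display_max     : maximum display luminance of the device
    """
    ratio = display_max / max_target_lum
    # Blend each parameter toward the luminance ratio by its coefficient.
    return {k: v * (1 + coeffs.get(k, 0.0) * (ratio - 1))
            for k, v in params.items()}
```

When the device's peak luminance equals the target system's, the ratio is 1 and the second parameter set equals the first, regardless of the coefficients.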

Image capturing method and terminal device

An image capturing method and a terminal device are provided. The method includes entering a camera application to start a lens and display a viewfinder interface; converting an original image captured by the lens into a red-green-blue (RGB) image; decreasing the luminance of the RGB image below a first luminance, or increasing it above a second luminance, to obtain a first image; converting the RGB image into N frames of high-dynamic-range (HDR) images; and fusing the color information of pixels at each same location in the first image and the N frames of HDR images to obtain a final image.
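
The luminance-clamping and fusion steps can be sketched as below. Both the scaling rule used to push luminance below the first threshold and the per-pixel averaging used as "fusing color information" are assumed stand-ins; the abstract does not specify either operation.

```python
import numpy as np

def capture_pipeline(raw_rgb, first_luminance, hdr_frames):
    """Form the first image by clamping luminance below a threshold,
    then fuse it per-pixel with the N HDR frames by averaging."""
    rgb = raw_rgb.astype(np.float64)
    luma = rgb.mean(axis=-1, keepdims=True)            # crude luminance proxy
    scale = np.minimum(1.0, first_luminance / np.maximum(luma, 1e-6))
    first = rgb * scale                                # luminance <= threshold
    stack = np.stack([first] + [f.astype(np.float64) for f in hdr_frames])
    return stack.mean(axis=0)                          # fuse at each location
```

The symmetric "increase above a second luminance" branch would use np.maximum with a second threshold instead; it is omitted to keep the sketch short.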
