Method and apparatus for processing image signal conversion, and terminal device
10616497 · 2020-04-07
Assignee
Inventors
- Meng Li (Beijing, CN)
- Hai Chen (Shenzhen, CN)
- Xiaozhen Zheng (Shenzhen, CN)
- Jianhua Zheng (Beijing, CN)
CPC classification
G06F7/483
PHYSICS
H04N23/741
ELECTRICITY
H04N1/646
ELECTRICITY
International classification
G06F7/483
PHYSICS
Abstract
The present disclosure discloses a method and an apparatus for processing image signal conversion. In one example method, an input primary color signal is obtained. The primary color signal is a numeric value of an optical signal corresponding to an image. The primary color signal is proportional to light intensity. Conversion processing is performed on the primary color signal to obtain processed image information. The image information is a numeric expression value of the image. The conversion processing includes at least the following processing:
where a, b, m, and p are rational numbers, L is the input primary color signal, and L′ is the processed image information.
Claims
1. A method for processing image signal conversion, wherein the method comprises: obtaining an input primary color signal, wherein the input primary color signal is a numeric value of an optical signal corresponding to an image, and the input primary color signal is proportional to light intensity; and obtaining processed image information by performing conversion processing on the input primary color signal, wherein the processed image information is a numeric expression value of the image, and the conversion processing comprises at least the following processing:
2. The method according to claim 1, wherein the conversion processing comprises at least a scaling parameter a and a bias parameter b, and the scaling parameter and the bias parameter are used to control a shape of a conversion characteristic curve of the conversion processing.
3. The method according to claim 1, wherein the conversion processing comprises at least a scaling parameter a and a bias parameter b, and a+b=1.
4. The method according to claim 3, wherein the conversion processing is:
5. The method according to claim 1, wherein that a, b, m, and p are rational numbers comprises one of: a=1.12672, b=0.12672, m=0.14, and p=2.2; a=1.19996, b=0.19996, m=0.11, and p=1.1; a=1.17053, b=0.17053, m=0.12, and p=1.4; a=1.14698, b=0.14698, m=0.13, and p=1.8; a=1.11007, b=0.11007, m=0.15, and p=2.7; a=1.12762, b=0.127622, m=0.14, and p=2.3; a=1.13014, b=0.13014, m=0.14, and p=2.6; a=1.11204, b=0.112042, m=0.15, and p=3; and a=1.09615, b=0.0961462, m=0.16, and p=3.3.
6. The method according to claim 1, wherein that a, b, m, and p are rational numbers comprises one of: a=1.2441, b=0.2441, m=0.1, and p=1.1; a=1.20228, b=0.20228, m=0.11, and p=1.2; a=1.17529, b=0.17529, m=0.12, and p=1.7; a=1.14933, b=0.14933, m=0.13, and p=2; a=1.12762, b=0.12762, m=0.14, and p=2.3; a=1.11204, b=0.11204, m=0.15, and p=3; and a=1.09615, b=0.09615, m=0.16, and p=3.3.
7. The method according to claim 1, wherein the input primary color signal is a numeric value of a color component corresponding to specific color space.
8. The method according to claim 1, wherein the input primary color signal is a color component corresponding to specific color space, the color component comprising at least one of an R component, a G component, a B component, and a Y component.
9. The method according to claim 1, wherein the input primary color signal is a numeric value of a color component corresponding to specific color space, and the numeric value is expressed in a floating-point number, a half-precision floating-point number, or a fixed-point number.
10. The method according to claim 1, wherein the conversion processing is computation performed in normalized space [0,1].
11. The method according to claim 1, wherein at least one of the following: the input primary color signal is a numeric value of an optical signal corresponding to a photographing scene in a camera, and the image information is a linear numeric expression value used for recording an original optical signal of a scene image in the camera; the input primary color signal is a linear numeric expression value of an original optical signal of the image, and the image information is a non-linear numeric expression value of an image generated after conversion processing; and the input primary color signal is a first non-linear numeric expression value of the image, and the image information is a second non-linear numeric expression value of an image generated after conversion processing.
12. A method for processing image signal conversion, wherein the method comprises: obtaining input image information, wherein the input image information is a numeric expression value of an image; and performing conversion processing on the input image information to obtain an output primary color signal, wherein the output primary color signal is a value used by a display device to display a reference optical signal of the image, and the output primary color signal is proportional to light intensity; and the conversion processing comprises:
13. The method according to claim 12, wherein the conversion processing comprises at least a scaling parameter a and a bias parameter b, and the scaling parameter and the bias parameter are used to control a shape of a conversion characteristic curve of the conversion processing.
14. The method according to claim 12, wherein the conversion processing comprises at least a scaling parameter a and a bias parameter b, and a+b=1.
15. The method according to claim 14, wherein the conversion processing is:
16. The method according to claim 12, wherein that a, b, m, and p are rational numbers comprises one of: a=1.12672, b=0.12672, m=0.14, and p=2.2; a=1.19996, b=0.19996, m=0.11, and p=1.1; a=1.17053, b=0.17053, m=0.12, and p=1.4; a=1.14698, b=0.14698, m=0.13, and p=1.8; a=1.11007, b=0.11007, m=0.15, and p=2.7; a=1.12762, b=0.127622, m=0.14, and p=2.3; a=1.13014, b=0.13014, m=0.14, and p=2.6; a=1.11204, b=0.112042, m=0.15, and p=3; and a=1.09615, b=0.0961462, m=0.16, and p=3.3.
17. The method according to claim 12, wherein that a, b, m, and p are rational numbers comprises one of: a=1.2441, b=0.2441, m=0.1, and p=1.1; a=1.20228, b=0.20228, m=0.11, and p=1.2; a=1.17529, b=0.17529, m=0.12, and p=1.7; a=1.14933, b=0.14933, m=0.13, and p=2; a=1.12762, b=0.12762, m=0.14, and p=2.3; a=1.11204, b=0.11204, m=0.15, and p=3; and a=1.09615, b=0.09615, m=0.16, and p=3.3.
18. The method according to claim 12, wherein the output primary color signal is a numeric value of a color component corresponding to specific color space.
19. The method according to claim 12, wherein a color component, of the output primary color signal, corresponding to specific color space comprises at least one of an R component, a G component, a B component, and a Y component.
20. The method according to claim 12, wherein the output primary color signal is a numeric value of a color component corresponding to specific color space, and the numeric value is expressed in a floating-point number, a half-precision floating-point number, or a fixed-point number.
21. The method according to claim 12, wherein the conversion processing is computation performed in normalized space [0,1].
22. The method according to claim 12, wherein at least one of the following: the image information is a non-linear numeric expression value that is used to display the image and that is input to a display terminal device, and the output primary color signal is a numeric value of a corresponding optical signal in the display terminal device; the image information is a non-linear numeric expression value of the input image, and the output primary color signal is a linear numeric expression value; and the image information is a first non-linear numeric expression value of an image generated after conversion processing, and the output primary color signal is a second non-linear numeric expression value of the image.
23. An apparatus for processing image signal conversion, wherein the apparatus comprises: a receiver, the receiver configured to obtain an input primary color signal, wherein the input primary color signal is a numeric value of an optical signal corresponding to an image, and the input primary color signal is proportional to light intensity; and at least one processor, the at least one processor configured to obtain processed image information by performing conversion processing on the input primary color signal, wherein the processed image information is a numeric expression value of the image, and the conversion processing comprises at least the following processing:
24. An apparatus for processing image signal conversion, wherein the apparatus comprises: a receiver, the receiver configured to obtain input image information, wherein the input image information is a numeric expression value of an image; and at least one processor, the at least one processor configured to perform conversion processing on the input image information to obtain an output primary color signal, wherein the output primary color signal is a value used by a display device to display a reference optical signal of the image, the output primary color signal is proportional to light intensity, and the conversion processing comprises:
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) To describe the technical solutions in the embodiments of the present disclosure more clearly, the following briefly describes the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present disclosure, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
DESCRIPTION OF EMBODIMENTS
(10) The following clearly describes the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings in the embodiments of the present disclosure.
(11) Referring to
(12) S201. A first terminal device performs optical-electro transfer on an input primary color signal by using a preset optical-electro transfer function, to obtain image information generated after optical-electro transfer.
(13) The terminal device may perform optical-electro transfer on the input primary color signal by using the preset optical-electro transfer function, to obtain the image information generated after optical-electro transfer. The terminal device may be a satellite, a personal computer (PC), a smartphone, or the like.
(14) In specific implementation, a quantization curve simulates the change in detail that the human eye perceives at different brightness levels. Statistics collected on test sequences show that there is a relatively large difference between the brightness distribution curve of the real world and the curve that simulates how the human eye perceives brightness. For example, dynamic-range statistics are collected on existing BT.2020 HDR high-definition sequences. The brightness range is divided into intervals for collecting the statistics, and the statistical result is shown in Table 1.
(15) TABLE 1
     Brightness interval (nits)   0~1000     1000~2000   2000~3000   3000~4000   >4000
     Sequence A                   99.849%      0.101%      0.038%      0.012%    0.000%
     Sequence B                   99.938%      0.035%      0.015%      0.012%    0.000%
     Sequence C                   80.851%     14.566%      3.329%      1.254%    0.000%
     Sequence D                   92.156%      7.227%      0.388%      0.192%    0.038%
(16) It can be learned from Table 1 that although the HDR sequences have a relatively high dynamic range, brightness is mainly distributed between 0 nits and 2000 nits (excluding 2000 nits): brightness between 0 nits and 1000 nits accounts for 80% to 99%, and brightness between 0 nits and 2000 nits (excluding 2000 nits) accounts for 97% to 99%. Therefore, considering the sensitivity of the human eye to brightness, the range from 0 nits to 1000 nits is used as the key protected brightness segment of the quantization curve.
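Interval statistics like those in Table 1 can be collected with a simple binning pass over per-pixel linear-light luminance. The sketch below is illustrative only; the array `frame` and its exponential test distribution are hypothetical stand-ins for a real HDR sequence:

```python
import numpy as np

def brightness_distribution(luminance_nits):
    """Percentage of samples falling into the Table 1 brightness intervals."""
    edges = [0, 1000, 2000, 3000, 4000, np.inf]  # interval bounds in nits
    counts, _ = np.histogram(luminance_nits, bins=edges)
    return 100.0 * counts / luminance_nits.size

# Hypothetical test data: mostly dark pixels with a long bright tail
rng = np.random.default_rng(0)
frame = rng.exponential(scale=150.0, size=100_000)
dist = brightness_distribution(frame)  # dist[0] = share of 0~1000 nits, etc.
```

For a distribution like that of sequence A or B, nearly all of the mass lands in the first interval, which is the observation the key protected brightness segment is based on.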
(17) A conventional rational quantization function is:
(18)
where p is a preset parameter, L is brightness information of a real world, and F(L) is a quantized value. A quantization curve of a rational quantization function shown in
(19) In addition, a Gamma function is defined in the ITU-R Recommendation BT.1886 standard. The Gamma function is an early optical-electro transfer function. The Gamma function is shown as follows:
L=a(max[(V+b),0])^r, where
(20) L represents image information generated after optical-electro transfer, V represents brightness information of a real world, and r=2.4.
(23) An image displayed by using the Gamma function on a display device with brightness of 100 nits has relatively good quality. However, as display devices are upgraded, when the brightness of the display device reaches 600 nits or 2000 nits, an image output by using the Gamma function cannot be displayed properly on the display device.
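In the full BT.1886 specification (which defines the display-side, electro-optical direction of this curve), the constants a and b are derived from the display's white and black luminance. A minimal sketch, assuming the nominal 100-nit white level mentioned above and a zero black level:

```python
def bt1886_eotf(V, Lw=100.0, Lb=0.0):
    """BT.1886 reference curve: luminance L = a * max(V + b, 0) ** r.

    V  : normalized video signal in [0, 1]
    Lw : display luminance for white, in nits (100 nits, per the text above)
    Lb : display luminance for black, in nits
    """
    r = 2.4
    root = 1.0 / r
    a = (Lw ** root - Lb ** root) ** r          # scaling constant
    b = Lb ** root / (Lw ** root - Lb ** root)  # black-lift constant
    return a * max(V + b, 0.0) ** r
```

With Lw = 100 and Lb = 0 this reduces to L = 100 · V^2.4, so a full-scale signal maps to exactly 100 nits.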
(24) Therefore, with reference to the rational quantization function and the Gamma function, an optical-electro transfer function is proposed in this embodiment of the present disclosure. The Weber score calculated by using this optical-electro transfer function accords with the distribution characteristic of scene brightness statistics, so that the quantization curve better accords with the perception characteristic of the human eye; that is, the dynamic range that meets a Weber score constraint is effectively expanded.
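The Weber-score criterion in the preceding paragraph can be made concrete as follows: quantize the curve to n-bit codes, map each code back to linear light, and take the relative luminance step ΔL/L between adjacent codes. The sketch below uses a plain power-law curve as a stand-in, since the patent's own transfer function is given by the equation omitted above:

```python
import numpy as np

def weber_scores(eotf, bits=10):
    """Relative luminance step (L[k+1] - L[k]) / L[k] between adjacent codes.

    eotf maps a normalized code value in [0, 1] to normalized linear light;
    a smaller score means finer quantization around that luminance.
    """
    codes = np.arange(1, 2 ** bits) / (2 ** bits - 1.0)  # skip code 0 (L = 0)
    L = eotf(codes)
    return (L[1:] - L[:-1]) / L[:-1]

# Stand-in transfer characteristic: a plain 2.4 power law
scores = weber_scores(lambda v: v ** 2.4, bits=10)
# The largest step sits at the darkest codes: a plain power law spends too
# few codes near black, which is what HDR transfer functions redistribute.
```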
(25) A brightness statistics curve shown in
(26) The optical-electro transfer function in conventional scheme 2 uses the conventional Gamma function at the low end and a log curve at the high end; the result is the Hybrid Log-Gamma transfer function, which may be shown as follows:
(27)
where
(28) E′ represents image information generated after optical-electro transfer, E represents normalized light information of a real world, and a, b, c, and r are preset parameters. The dynamic range in scheme 2 is only between 0 nits and 2000 nits (excluding 2000 nits); any part exceeding 2000 nits is truncated to 2000 nits.
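The Hybrid Log-Gamma equation itself is omitted in paragraph (27) above. For reference, the published form of the HLG OETF (ARIB STD-B67, also adopted in ITU-R BT.2100) can be sketched as follows; the constants here are the standard's, not values taken from this patent:

```python
import math

def hlg_oetf(E):
    """Hybrid Log-Gamma OETF in its ARIB STD-B67 / BT.2100 form.

    E is normalized scene linear light in [0, 1]; the return value is the
    non-linear signal. A square-root segment covers the low end and a log
    segment covers the high end, joined continuously at E = 1/12.
    """
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    if E <= 1.0 / 12.0:
        return math.sqrt(3.0 * E)
    return a * math.log(12.0 * E - b) + c
```

The two segments meet at hlg_oetf(1/12) = 0.5, and full-scale input maps to approximately 1.0.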
(29) A Weber score shown in
(30) S202. The first terminal device transfers, from RGB space to YCbCr space by using a preset first color space transfer function, the image information generated after optical-electro transfer, to obtain image information generated after space transfer.
(31) S203. The first terminal device performs, in the YCbCr space, floating-point-to-fixed-point conversion on the image information generated after space transfer, to obtain image information generated after floating-point-to-fixed-point conversion.
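Step S203 does not pin down a particular quantization law. A common choice, sketched here as an illustrative assumption rather than the patent's mandated method, is the BT.2020-style narrow-range mapping (luma [0, 1] → [64, 940] and chroma [−0.5, 0.5] → [64, 960] at 10 bits):

```python
import numpy as np

def to_fixed_point(Y, Cb, Cr, bits=10):
    """Narrow-range floating-point-to-fixed-point conversion in YCbCr space.

    Y is a normalized luma signal in [0, 1]; Cb and Cr are chroma signals
    in [-0.5, 0.5]. Returns integer code values at the given bit depth.
    """
    scale = 2 ** (bits - 8)  # 4 at 10 bits
    DY = np.round((219.0 * np.asarray(Y) + 16.0) * scale).astype(np.int32)
    DCb = np.round((224.0 * np.asarray(Cb) + 128.0) * scale).astype(np.int32)
    DCr = np.round((224.0 * np.asarray(Cr) + 128.0) * scale).astype(np.int32)
    return DY, DCb, DCr

DY, DCb, DCr = to_fixed_point([0.0, 1.0], [-0.5, 0.0], [0.0, 0.5])
# Luma endpoints land on codes 64 and 940; zero chroma lands on code 512
```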
(32) S204. The first terminal device performs downsampling on the image information generated after floating-point-to-fixed-point conversion, to obtain image information generated after downsampling.
(33) S205. The first terminal device encodes the image information generated after downsampling, to obtain encoded image information.
(34) S206. The first terminal device sends the encoded image information to a second terminal device.
(35) S207. The second terminal device decodes the encoded image information, to obtain decoded image information.
(36) S208. The second terminal device performs upsampling on the decoded image information, to obtain image information generated after upsampling.
(37) S209. The second terminal device performs fixed-point-to-floating-point conversion on the image information generated after upsampling, to obtain image information generated after fixed-point-to-floating-point conversion.
(38) S210. The second terminal device transfers, from the YCbCr space to the RGB space by using a preset second color space transfer function, the image information generated after fixed-point-to-floating-point conversion, to obtain image information generated after space transfer.
(39) S211. The second terminal device performs, by using a preset electro-optical transfer function, electro-optical transfer on the image information generated after space transfer, to obtain an output primary color signal.
(40) S212. The second terminal device outputs the primary color signal.
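Steps S201 to S212 can be summarized as a send/receive round trip. The sketch below uses hypothetical stand-ins: a plain 2.4 power-law pair for the transfer functions, the BT.2020 luma/chroma coefficients for the color-space transfer, and identity passthroughs in place of the resampling and codec steps, so it illustrates the data flow rather than the patent's specific functions:

```python
import numpy as np

GAMMA = 2.4
# BT.2020 RGB -> YCbCr matrix (rows: Y', Cb, Cr)
M = np.array([[ 0.2627,   0.6780,   0.0593 ],
              [-0.13963, -0.36037,  0.5    ],
              [ 0.5,     -0.45979, -0.04021]])
OFFSETS = np.array([16.0, 128.0, 128.0])
SCALES = np.array([219.0, 224.0, 224.0])

def sender(rgb_linear):
    V = rgb_linear ** (1.0 / GAMMA)                  # S201 optical-electro transfer
    ycbcr = M @ V                                    # S202 RGB -> YCbCr
    code = np.round((SCALES * ycbcr + OFFSETS) * 4)  # S203 10-bit fixed point
    return code                                      # S204-S206 modeled as identity

def receiver(code):
    # S207-S208 (decode, upsample) modeled as identity
    ycbcr = (code / 4 - OFFSETS) / SCALES            # S209 fixed -> floating point
    V = np.linalg.solve(M, ycbcr)                    # S210 YCbCr -> RGB
    return np.clip(V, 0.0, 1.0) ** GAMMA             # S211 electro-optical transfer

pixel = np.array([0.25, 0.50, 0.75])                 # one linear RGB pixel
restored = receiver(sender(pixel))                   # S212 output primary color signal
# The round trip reproduces the input up to 10-bit quantization error
```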
(41) When the video stream encoding and decoding framework is SMPTE 2084 TF and the original optical-electro transfer module is replaced with the optical-electro transfer function in this application, analysis shows that, compared with the original video stream encoding and decoding method, the method for processing image signal conversion in this application reduces the bit rate by 18.8% for the peak signal to noise ratio (PSNR), by 20.3% for the masked peak signal to noise ratio (MPSNR), and by 9% for Delta-E (ΔE, a measure of the color difference perceived by the human eye).
(42) In the method for processing image signal conversion shown in
(43) Referring to
(44) S301. A terminal device performs optical-electro transfer on an input primary color signal by using a preset optical-electro transfer function, to obtain image information generated after optical-electro transfer.
(45) The terminal device may perform optical-electro transfer on the input primary color signal by using the preset optical-electro transfer function, to obtain the image information generated after optical-electro transfer. The terminal device may be a smartphone, a camera, a tablet computer, or the like. An image composed of the primary color signal may be collected by the camera or stored locally in advance.
(46) S302. The terminal device transfers, from RGB space to YCbCr space by using a preset first color space transfer function, the image information generated after optical-electro transfer, to obtain image information generated after space transfer.
(47) S303. The terminal device performs, in the YCbCr space, floating-point-to-fixed-point conversion on the image information, to obtain image information generated after floating-point-to-fixed-point conversion.
(48) S304. The terminal device performs downsampling on the image information generated after floating-point-to-fixed-point conversion, to obtain image information generated after downsampling.
(49) S305. The terminal device performs upsampling on the image information generated after downsampling, to obtain image information generated after upsampling.
(50) S306. The terminal device performs fixed-point-to-floating-point conversion on the image information generated after upsampling, to obtain image information generated after fixed-point-to-floating-point conversion.
(51) S307. The terminal device transfers, from the YCbCr space to the RGB space by using a preset second color space transfer function, the image information generated after fixed-point-to-floating-point conversion, to obtain image information generated after space transfer.
(52) S308. The terminal device performs, by using a preset electro-optical transfer function, electro-optical transfer on the image information generated after space transfer, to obtain an output primary color signal.
(53) S309. The terminal device outputs the primary color signal.
(54) In the method for processing image signal conversion shown in
(55) Referring to
(56) The processor 401 may be a central processing unit (CPU), a network processor (NP), or the like.
(57) The memory 402 may be specifically configured to store a primary color signal and the like. The memory 402 may include a volatile memory, for example, a random access memory (RAM); or the memory may include a nonvolatile memory, for example, a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory may include a combination of the memories of the foregoing types.
(58) The input apparatus 403 is configured to receive an input primary color signal. For example, the input apparatus 403 is a wireless interface or a wired interface.
(59) The output apparatus 404 is configured to output a primary color signal. For example, the output apparatus 404 is a wireless interface or a wired interface.
(60) The processor 401, the input apparatus 403, and the output apparatus 404 invoke a program stored in the memory 402, and may perform the following operations:
(61) the input apparatus 403 is configured to obtain the input primary color signal, where the primary color signal is a numeric value of an optical signal corresponding to an image, and the primary color signal is proportional to light intensity;
(62) the processor 401 is configured to perform, by using a preset optical-electro transfer function, optical-electro transfer on the input primary color signal, to obtain image information generated after optical-electro transfer, where the image information is a numeric expression value of the image, and the conversion processing includes at least the following processing:
(63)
where a, b, m, and p are rational numbers, L is the input primary color signal, and L′ is the image information generated after conversion processing;
(64) the processor 401 is further configured to transfer, from RGB space to YCbCr space by using a preset color space transfer function, the image information generated after optical-electro transfer, to obtain image information generated after space transfer;
(65) the processor 401 is further configured to perform, in the YCbCr space, floating-point-to-fixed-point conversion on the image information, to obtain image information generated after floating-point-to-fixed-point conversion;
(66) the processor 401 is further configured to perform downsampling on the image information generated after floating-point-to-fixed-point conversion, to obtain image information generated after downsampling;
(67) the processor 401 is further configured to perform upsampling on the image information generated after downsampling, to obtain image information generated after upsampling;
(68) the processor 401 is further configured to perform fixed-point-to-floating-point conversion on the image information generated after upsampling, to obtain image information generated after fixed-point-to-floating-point conversion;
(69) the processor 401 is further configured to transfer, from the YCbCr space to the RGB space by using a preset color space transfer function, the image information generated after fixed-point-to-floating-point conversion, to obtain image information generated after space transfer;
(70) the processor 401 is further configured to perform, by using a preset electro-optical transfer function, electro-optical transfer on the image information generated after color space transfer, to obtain an output primary color signal, where the output primary color signal is a value used by a display device to display a reference optical signal of the image, and the primary color signal is proportional to light intensity; and
(71) the conversion processing includes:
(72)
where a, b, m, and p are rational numbers, L is input image information, and L′ is the processed output primary color signal; and
(73) the output apparatus 404 is configured to output the primary color signal.
(74) Specifically, the terminal device described in this embodiment of the present disclosure may be configured to implement some or all of the processes in the embodiment that is of the method for processing image signal conversion and that is described with reference to
(75) Referring to
(76) The signal obtaining unit 501 is configured to obtain an input primary color signal, where the primary color signal is a numeric value of an optical signal corresponding to an image, and the primary color signal is proportional to light intensity.
(77) The conversion processing unit 502 is configured to perform, by using an optical-electro transfer function, conversion processing on the primary color signal, to obtain processed image information, where the image information is a numeric expression value of the image, and the conversion processing includes at least the following processing:
(78)
where a, b, m, and p are rational numbers, L is the input primary color signal, and L′ is the image information generated after conversion processing.
(79) In an optional embodiment, the conversion processing includes at least a scaling parameter a and a bias parameter b, and the scaling parameter and the bias parameter are used to control a shape of a conversion characteristic curve of the conversion processing.
(80) In an optional embodiment, the conversion processing includes at least a scaling parameter a and a bias parameter b, and the scaling parameter a and the bias parameter b meet: a+b=1.
(81) In an optional embodiment, the conversion processing is:
(82)
where a, m, and p are rational numbers, L is the input primary color signal, and L′ is the image information generated after conversion processing.
(83) In an optional embodiment, that a, b, m, and p are rational numbers includes:
(84) a=1.12672, b=0.12672, m=0.14, and p=2.2; or
(85) a=1.19996, b=0.19996, m=0.11, and p=1.1; or
(86) a=1.17053, b=0.17053, m=0.12, and p=1.4; or
(87) a=1.14698, b=0.14698, m=0.13, and p=1.8; or
(88) a=1.11007, b=0.11007, m=0.15, and p=2.7; or
(89) a=1.12762, b=0.127622, m=0.14, and p=2.3; or
(90) a=1.13014, b=0.13014, m=0.14, and p=2.6; or
(91) a=1.11204, b=0.112042, m=0.15, and p=3; or
(92) a=1.09615, b=0.0961462, m=0.16, and p=3.3.
(93) In an optional embodiment, that a, b, m, and p are rational numbers includes:
(94) a=1.2441, b=0.2441, m=0.1, and p=1.1; or
(95) a=1.20228, b=0.20228, m=0.11, and p=1.2; or
(96) a=1.17529, b=0.17529, m=0.12, and p=1.7; or
(97) a=1.14933, b=0.14933, m=0.13, and p=2; or
(98) a=1.12762, b=0.12762, m=0.14, and p=2.3; or
(99) a=1.11204, b=0.11204, m=0.15, and p=3; or
(100) a=1.09615, b=0.09615, m=0.16, and p=3.3.
(101) In an optional embodiment, the primary color signal is a numeric value of a color component corresponding to specific color space.
(102) In an optional embodiment, the primary color signal is a color component corresponding to specific color space, including at least an R component, a G component, a B component, or a Y component.
(103) In an optional embodiment, the primary color signal is a numeric value of a color component corresponding to specific color space, and the numeric value is expressed in a floating-point number, a half-precision floating-point number, or a fixed-point number. The half-precision floating-point number, for example, is a 16-bit floating-point number, or a half-precision floating-point number defined in IEEE 754.
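As a concrete illustration of the three representations named in this embodiment, the same normalized color component can be stored as a single-precision float, an IEEE 754 half-precision float, or a 10-bit fixed-point code (the value 0.707106 is an arbitrary example):

```python
import numpy as np

value = 0.707106                       # a normalized color component in [0, 1]

as_float32 = np.float32(value)         # single-precision floating-point number
as_float16 = np.float16(value)         # IEEE 754 half-precision (16-bit) number
as_fixed10 = int(round(value * 1023))  # 10-bit fixed-point code in [0, 1023]

# Half precision keeps roughly three decimal digits here; the fixed-point
# code keeps the nearest multiple of 1/1023.
```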
(104) In an optional embodiment, the conversion processing is computation performed in normalized space [0,1].
(105) In an optional embodiment, the primary color signal is a numeric value of an optical signal corresponding to a photographing scene in a camera, and the image information is a linear numeric expression value used for recording an original optical signal of a scene image in the camera; or the primary color signal is a linear numeric expression value of an original optical signal of the image, and the image information is a non-linear numeric expression value of an image generated after conversion processing; or the primary color signal is a first non-linear numeric expression value of the image, and the image information is a second non-linear numeric expression value of an image generated after conversion processing.
(106) In the apparatus for processing image signal conversion shown in
(107) Referring to
(108) The information obtaining unit 601 is configured to obtain input image information, where the image information is a numeric expression value of an image.
(109) The conversion processing unit 602 is configured to perform, by using an electro-optical transfer function, conversion processing on the image information, to obtain an output primary color signal, where the primary color signal is a value used by a display device to display a reference optical signal of the image, and the primary color signal is proportional to light intensity; and
(110) the conversion processing includes:
(111)
where a, b, m, and p are rational numbers, L is the input image information, and L′ is the processed output primary color signal.
(112) In an optional embodiment, the conversion processing includes at least a scaling parameter a and a bias parameter b, and the scaling parameter and the bias parameter are used to control a shape of a conversion characteristic curve of the conversion processing.
(113) In an optional embodiment, the conversion processing includes at least a scaling parameter a and a bias parameter b, and the scaling parameter a and the bias parameter b meet: a+b=1.
(114) In an optional embodiment, the conversion processing is:
(115)
where a, m, and p are rational numbers, L is the input image information, and L′ is the processed output primary color signal.
(116) In an optional embodiment, that a, b, m, and p are rational numbers includes:
(117) a=1.12672, b=0.12672, m=0.14, and p=2.2; or
(118) a=1.19996, b=0.19996, m=0.11, and p=1.1; or
(119) a=1.17053, b=0.17053, m=0.12, and p=1.4; or
(120) a=1.14698, b=0.14698, m=0.13, and p=1.8; or
(121) a=1.11007, b=0.11007, m=0.15, and p=2.7; or
(122) a=1.12762, b=0.127622, m=0.14, and p=2.3; or
(123) a=1.13014, b=0.13014, m=0.14, and p=2.6; or
(124) a=1.11204, b=0.112042, m=0.15, and p=3; or
(125) a=1.09615, b=0.0961462, m=0.16, and p=3.3.
(126) In an optional embodiment, that a, b, m, and p are rational numbers includes:
(127) a=1.2441, b=0.2441, m=0.1, and p=1.1; or
(128) a=1.20228, b=0.20228, m=0.11, and p=1.2; or
(129) a=1.17529, b=0.17529, m=0.12, and p=1.7; or
(130) a=1.14933, b=0.14933, m=0.13, and p=2; or
(131) a=1.12762, b=0.12762, m=0.14, and p=2.3; or
(132) a=1.11204, b=0.11204, m=0.15, and p=3; or
(133) a=1.09615, b=0.09615, m=0.16, and p=3.3.
(134) In an optional embodiment, the primary color signal is a numeric value of a color component corresponding to specific color space.
(135) In an optional embodiment, a color component, of the primary color signal, corresponding to specific color space includes at least an R component, a G component, a B component, or a Y component.
(136) In an optional embodiment, the processed output primary color signal is a numeric value of a color component corresponding to specific color space, and the numeric value is expressed in a floating-point number, a half-precision floating-point number, or a fixed-point number. The half-precision floating-point number, for example, is a 16-bit floating-point number, or a half-precision floating-point number defined in IEEE 754.
(137) In an optional embodiment, the conversion processing is computation performed in normalized space [0,1].
(138) In an optional embodiment, the image information is a non-linear numeric expression value that is used to display the image and that is input to a display terminal device, and the primary color signal is a numeric value of a corresponding optical signal in the display terminal device; or the image information is a non-linear numeric expression value of the input image, and the primary color signal is a linear numeric expression value; or the image information is a first non-linear numeric expression value of an image generated after conversion processing, and the primary color signal is a second non-linear numeric expression value of the image.
(139) In the apparatus for processing image signal conversion shown in
(140) In descriptions in this specification, descriptions about such reference terms as an embodiment, some embodiments, an example, a specific example, and some examples mean that specific features, structures, materials, or characteristics described with reference to the embodiments or examples are included in at least one embodiment or example of the present disclosure. In the specification, the foregoing example expressions of the terms are not necessarily with respect to a same embodiment or example. In addition, the described specific features, structures, materials, or characteristics may be combined in a proper manner in any one or more of the embodiments or examples. In addition, a person skilled in the art may integrate or combine different embodiments or examples and characteristics of different embodiments or examples described in this specification, as long as they do not conflict with each other.
(141) In addition, the terms first and second are merely intended for description, and shall not be understood as indicating or implying relative importance or implicitly indicating the quantity of indicated technical features. Therefore, a feature limited by first or second may explicitly or implicitly include at least one such feature. In the descriptions of the present disclosure, a plurality of means at least two, for example, two or three, unless otherwise specifically limited.
(142) The logic and/or steps shown in the flowcharts or described herein in other manners may, for example, be considered as a list of executable instructions for implementing logical functions, and may be specifically implemented on any computer-readable medium for use by an instruction execution system, apparatus, or device (for example, a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute the instructions), or for use by a combination of the instruction execution system, apparatus, or device. In this specification, the computer-readable medium may be any apparatus that can include, store, communicate, propagate, or transmit programs for use by the instruction execution system, apparatus, or device, or by a combination thereof. More specific examples (this list is not exhaustive) of the computer-readable medium include the following: an electrical connection part (an electronic apparatus) with one or more buses, a portable computer cartridge (a magnetic apparatus), a random access memory, a read-only memory, an erasable programmable read-only memory, an optical fiber apparatus, and a portable compact disc read-only memory. The computer-readable medium may even be a piece of paper on which the programs can be printed, or another appropriate medium: for example, the paper or other medium may be optically scanned and then processed (by editing, decoding, or other appropriate means when necessary) to obtain the programs electronically, and the programs are then stored in a computer memory.
(143) It should be understood that the parts of the present disclosure may be implemented by using hardware, software, firmware, or a combination thereof. In the foregoing implementations, a plurality of steps or methods may be implemented by using software or firmware that is stored in a memory and executed by an appropriate instruction execution system. For example, if hardware is used for implementation, as in another implementation, any one or a combination of the following well-known technologies in the art may be used: a discrete logic circuit having a logic gate circuit configured to implement a logical function for a data signal, an application-specific integrated circuit having an appropriate combinational logic gate circuit, a programmable gate array, a field programmable gate array, and the like.
(144) In addition, the modules in the embodiments of the present disclosure may be implemented in the form of hardware, or may be implemented in the form of a software functional module. If an integrated module is implemented in the form of a software functional module and sold or used as an independent product, the integrated module may be stored in a computer-readable storage medium.
(145) Although the embodiments of the present disclosure are shown and described above, it can be understood that the foregoing embodiments are examples and shall not be construed as limiting the present disclosure. Within the scope of the present disclosure, a person of ordinary skill in the art may make changes, modifications, replacements, and variations to the foregoing embodiments.