Method and device for tone-mapping a picture by using a parametric tone-adjustment function
11182882 · 2021-11-23
Inventors
- Fabrice Leleannec (Mouazé, FR)
- Sebastien Lasserre (Thorigné-Fouillard, FR)
- Tangi Poirier (Rennes, FR)
- Edouard Francois (Bourg des Comptes, FR)
International classification
H04N 1/407
Abstract
The present principles relate to a method and device for tone-mapping an input picture by using a parametric tone-adjustment function. The method is characterized in that it comprises determining at least one parameter of said tone-adjustment function modulated by a brightness level of the input picture.
Claims
1. A method for tone-mapping an input picture by using a parametric tone-adjustment function described over multiple ranges, at least one parameter of the tone-adjustment function being associated with each range, wherein the method comprises: determining a non-linear tone adjustment function over at least one range according to a modulation value responsive to the brightness level; and determining a parameter of the tone-adjustment function according to the modulation value, wherein the modulation value is responsive to a mid-tone level of the input picture determined from a black level and a white level, wherein the black level and the white level are based on content of the input picture.
2. The method of claim 1, wherein the method further comprises determining another parameter of the tone-adjustment function according to an increasing concave function taking into account a brightness level of the input picture.
3. The method of claim 1, wherein the tone adjustment function is linear over a first range, parabolic over a second range and linear over a third range.
4. The method of claim 3, wherein the tone adjustment function is described: over the first range by:
y=SGC*x, where SGC is a real parameter value, y is an output value, and x is an input value; over the second range by:
y=a*x^2+b*x+c, x_SGC<x<x_HGC; over the third range by:
y=HGC*x+(1−HGC), where HGC is a slope; and where
5. The method of claim 4, wherein the real value highlightGainControl is a constant value.
6. The method of claim 4, wherein the real value highlightGainControl is kept as another parameter in addition to the modulation value.
7. The method of claim 4, wherein the real value highlightGainControl is given by:
highlightGainControl=2−2*exp(−0.26/Ba).
8. The method of claim 4, wherein the modulation value is responsive to a mid-tone level of the input picture.
9. The method of claim 8, wherein the mid-tone level is determined based on a black level and a white level.
10. The method of claim 9, wherein the mid-tone level is determined as one of a geometric mean and a logarithm mean of the black level and the white level.
11. The method of claim 4, wherein the input image is one of a plurality of pictures included in a video sequence, and wherein the determining of the modulation value is performed for each of the plurality of pictures, wherein the modulation values for the plurality of pictures are temporally smoothed.
12. The method of claim 3, wherein the tone adjustment function is described: over the first range by:
13. The method of claim 12, wherein the real value highlightGainControl is a constant value.
14. The method of claim 12, wherein the real value highlightGainControl is kept as another parameter in addition to the modulation value.
15. The method of claim 12, wherein the real value highlightGainControl is given by:
highlightGainControl=2−2*exp(−0.26/Ba).
16. The method of claim 1, wherein the tone-adjustment function is a parametric logarithm function described by:
17. A method for encoding an input picture comprising: obtaining a maximum value from component values of the input picture; obtaining a linear value by tone-mapping the maximum value according to the method of claim 1; multiplying the input picture by a ratio of the linear value over the maximum value.
18. A method for inverse-tone-mapping a tone-mapped version of an input picture by using a parametric inverse-tone-adjustment function described over multiple ranges, at least one parameter of the inverse-tone-adjustment function being associated with each range, wherein the method comprises: determining a non-linear tone adjustment function over at least one range according to a modulation value responsive to the brightness level, and obtaining a parameter of the inverse-tone-adjustment function according to the modulation value, wherein the modulation value is responsive to a mid-tone level of the input picture determined from a black level and a white level, wherein the black level and the white level are based on content of the input picture.
19. A method for decoding a picture comprising: obtaining a maximum value from the component values of a decoded picture; obtaining a non-linear value by inverse-tone-mapping the maximum value according to the method of claim 18; multiplying the decoded picture by a ratio of the non-linear value over the maximum value.
20. A device for tone-mapping an input picture by using a parametric tone-adjustment function described over multiple ranges, at least one parameter of the tone-adjustment function being associated with each range, wherein the device comprises: a processor configured to determine a non-linear tone adjustment function over at least one range according to a modulation value responsive to the brightness level, and to determine a parameter of the tone-adjustment function according to the modulation value, wherein the modulation value is responsive to a mid-tone level of the input picture determined from a black level and a white level, wherein the black level and the white level are based on content of the input picture.
21. A device for encoding an input picture comprising a processor configured to: obtain a maximum value from the component values of the input picture; obtain a linear value by tone-mapping the maximum value by: determining a non-linear tone adjustment function over at least one range of a plurality of ranges of a tone-adjustment function according to a modulation value responsive to the brightness level, wherein at least one parameter of the tone-adjustment function is associated with each range of the plurality of ranges; and determining a parameter of the tone-adjustment function according to the modulation value, wherein the modulation value is responsive to a mid-tone level of the input picture determined from a black level and a white level, wherein the black level and the white level are based on content of the input picture; multiply the input picture by a ratio of the linear value over the maximum value.
22. A device for decoding a picture comprising a processor configured to: obtain a maximum value from the component values of a decoded picture; obtain a non-linear value by inverse-tone-mapping the maximum value by: determining a non-linear tone adjustment function over at least one range of a plurality of ranges of an inverse-tone-adjustment function according to a modulation value responsive to the brightness level, wherein at least one parameter of the inverse-tone-adjustment function is associated with each range of the plurality of ranges; and obtaining a parameter of the inverse-tone-adjustment function according to the modulation value, wherein the modulation value is responsive to a mid-tone level of the input picture determined from a black level and a white level, wherein the black level and the white level are based on content of the picture; multiply the decoded picture by a ratio of the non-linear value over the maximum value.
23. A non-transitory computer-readable storage medium comprising instructions that cause a computing device to perform: determining a non-linear tone adjustment function over at least one range of a plurality of ranges of a tone-adjustment function according to a modulation value responsive to the brightness level, wherein at least one parameter of the tone-adjustment function is associated with each range of the plurality of ranges; determining a parameter of the tone-adjustment function according to the modulation value, wherein the modulation value is responsive to a mid-tone level of the input picture determined from a black level and a white level, wherein the black level and the white level are based on content of an input picture; and tone-mapping the input picture by applying the parametric tone-adjustment function.
24. A device for inverse-tone-mapping a tone-mapped version of an input picture by using a parametric inverse-tone-adjustment function described over multiple ranges, wherein at least one parameter of the inverse-tone-adjustment function is associated with each range, wherein the device comprises a processor configured to determine the inverse-tone-adjustment function as an inverse of a non-linear parametric tone-adjustment function described over at least one range according to a modulation value, wherein a parameter of the non-linear parametric tone-adjustment function is determined according to a modulation value, wherein the modulation value is derived from a mid-tone level of the input picture determined from a black level and a white level, wherein the black level and the white level are based on content of the input picture.
Description
4. BRIEF DESCRIPTION OF DRAWINGS
(1) In the drawings, examples of the present principles are illustrated.
(15) Similar or same elements are referenced with the same reference numbers.
6. DESCRIPTION OF EXAMPLE OF THE PRESENT PRINCIPLES
(16) The present principles will be described more fully hereinafter with reference to the accompanying figures, in which examples of the present principles are shown. The present principles may, however, be embodied in many alternate forms and should not be construed as limited to the examples set forth herein. Accordingly, while the present principles are susceptible to various modifications and alternative forms, specific examples thereof are shown by way of examples in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the present principles to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present principles as defined by the claims.
(17) The terminology used herein is for the purpose of describing particular examples only and is not intended to be limiting of the present principles. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes” and/or “including” when used in this specification specify the presence of stated features, integers, steps, operations, elements, and/or components but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Moreover, when an element is referred to as being “responsive” or “connected” to another element, it can be directly responsive or connected to the other element, or intervening elements may be present. In contrast, when an element is referred to as being “directly responsive” or “directly connected” to another element, there are no intervening elements present. As used herein the term “and/or” includes any and all combinations of one or more of the associated listed items and may be abbreviated as “/”.
(18) It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element without departing from the teachings of the present principles.
(19) Although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
(20) Some examples are described with regard to block diagrams and operational flowcharts in which each block represents a circuit element, module, or portion of code which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in other implementations, the function(s) noted in the blocks may occur out of the order noted. For example, two blocks shown in succession may, in fact, be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending on the functionality involved.
(21) Reference herein to “in accordance with an example” or “in an example” means that a particular feature, structure, or characteristic described in connection with the example can be included in at least one implementation of the present principles. The appearances of the phrase “in accordance with an example” or “in an example” in various places in the specification are not necessarily all referring to the same example, nor are separate or alternative examples necessarily mutually exclusive of other examples.
(22) Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.
(23) While not explicitly described, the present examples and variants may be employed in any combination or sub-combination.
(24) In accordance with the present principles, illustrated in
(25) In accordance with a first example of the present principles, illustrated
(26)
(27) The input parameter highlightGainControl equals a constant value CST, for example CST=2, which does not depend on the modulation value Ba.
(28) In accordance with a variant of the first example, the input parameter highlightGainControl is kept as another parameter in addition to the modulation value Ba.
(29) In accordance with a variant of the first example, the input parameter highlightGainControl depends on the modulation value Ba as follows:
highlightGainControl=2−2*exp(−0.26/Ba).
(30) In this first example and its variants, the modulation value Ba is expressed in nits and typically lies in the interval [1, 40].
(31) According to this first example and its variants, the parameters SGC, HGC and MTA are then obtained from these three input parameters according to equation (4).
(32) In accordance with a second example of the present principles, illustrated in
(33) Over the bottom interval, the tone adjustment function is linear:
(34) y=SGC*x
with SGC a real parameter value, y an output value and x an input value.
(35) Over the upper interval, the tone adjustment function is also linear:
(36) y=HGC*x+(1−HGC)
with HGC a slope (real parameter value).
(37) Over the mid-interval, the tone adjustment function is a parabola
(38) y=a*x^2+b*x+c, x_SGC<x<x_HGC,
connecting the two linear sections in a smooth cross-over. The width of the cross-over is determined by MTA, a real parameter value.
(39) In brief, the tone adjustment function TAF is defined by:
(40) TAF(x)=SGC*x for 0≤x≤x_SGC; a*x^2+b*x+c for x_SGC<x<x_HGC; HGC*x+(1−HGC) for x_HGC≤x≤1.
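For illustration, a minimal Python sketch of this three-range function follows. It assumes, as one plausible reading of the smooth cross-over, that the knots x_SGC and x_HGC are placed symmetrically (width MTA) around the intersection of the two linear sections, and that a, b and c follow from matching the value and slope of the parabola to the linear sections at the knots; the patent's own derivation of these quantities from equation (4) is not reproduced here.

```python
# Illustrative sketch of the three-range tone-adjustment function TAF.
# Knot placement and parabola coefficients are assumptions (see above).

def make_taf(sgc, hgc, mta):
    """Build TAF(x) on [0, 1] from slopes SGC, HGC and cross-over width MTA."""
    # Assumed knot placement: cross-over of width MTA centred on the
    # intersection of y = SGC*x and y = HGC*x + (1 - HGC).
    x_cross = (1.0 - hgc) / (sgc - hgc)
    x_sgc, x_hgc = x_cross - mta / 2.0, x_cross + mta / 2.0
    # Parabola a*x^2 + b*x + c matching value and slope at both knots.
    a = (hgc - sgc) / (2.0 * (x_hgc - x_sgc))
    b = sgc - 2.0 * a * x_sgc
    c = sgc * x_sgc - a * x_sgc ** 2 - b * x_sgc

    def taf(x):
        if x <= x_sgc:
            return sgc * x                    # bottom range: y = SGC*x
        if x >= x_hgc:
            return hgc * x + (1.0 - hgc)      # upper range: y = HGC*x + (1 - HGC)
        return a * x * x + b * x + c          # mid range: parabola
    return taf
```

With SGC>1>HGC, such a function boosts dark values, compresses highlights, and is continuous in value and slope across the two knots.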
(41) In accordance with a variant of this second example, the parameter values SGC′, HGC′ and MTA′ depend on two input parameter values, midTonesAdjustement and highlightGainControl, as follows:
(42)
(43) For example, in the above equations, L_target is equal to 100 nits, L_source is preferably equal to 4000 nits or 5000 nits, and v is the identity function.
(44) In accordance with a variant of this second example, the input parameter midTonesAdjustement is determined from the modulation value Ba as follows:
(45)
(46) The input parameter highlightGainControl equals a constant value CST, for example CST=2.
(47) In accordance with a variant of the second example, the input parameter highlightGainControl is kept as another parameter in addition to the modulation value Ba.
(48) In accordance with a variant of the second example, the input parameter highlightGainControl depends on the modulation value Ba as follows:
highlightGainControl=2−2*exp(−0.26/Ba)
(49) In this second example and its variants, the modulation value Ba is expressed in nits and preferably lies in the interval [1, 40].
(50) In accordance with a third example of the present principles, the tone adjustment function TAF is a parametric logarithm function defined by:
(51) ToneAdjustment(x)=log(1+x/(k·ba_N))/N, ∀x∈[0,1], with ba_N=Ba/PeakL and N=log(1+1/(k·ba_N)),
where k is a constant value, for example equal to 1000 and PeakL the peak luminance value of the input picture.
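A sketch of this third example and of its reciprocal is given below. The forward form is reconstructed here as the mathematical inverse of the InverseToneAdjustment formula of paragraph (77); treat it as an assumption to that extent.

```python
import math

def log_taf(x, ba, peak_l, k=1000.0):
    """Parametric-logarithm tone adjustment on [0, 1], reconstructed as the
    reciprocal of InverseToneAdjustment (paragraph (77))."""
    ba_n = ba / peak_l                     # ba_N = Ba / PeakL
    n = math.log(1.0 + 1.0 / (k * ba_n))   # N = log(1 + 1/(k*ba_N))
    return math.log(1.0 + x / (k * ba_n)) / n

def log_itaf(x, ba, peak_l, k=1000.0):
    """InverseToneAdjustment(x) = k*ba_N*(exp(x*N) - 1), for x in [0, 1]."""
    ba_n = ba / peak_l
    n = math.log(1.0 + 1.0 / (k * ba_n))
    return k * ba_n * (math.exp(x * n) - 1.0)
```

Both functions map [0, 1] onto [0, 1], and log_itaf(log_taf(x, …), …) returns x.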
(52) In accordance with an example of the present principles, the modulation value Ba is obtained as follows: the pixels of the input picture are classified into a histogram depending on their linear luminance Y, as shown in an exemplary histogram in
M:=√(B·W) (13)
(53) As a consequence, the three levels W, B and M depend on the content of the input picture.
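As a minimal sketch, assuming for illustration that the black level B and the white level W are read off the luminance histogram as low and high percentiles (the text does not fix a specific rule here):

```python
import numpy as np

def levels_from_luminance(y, lo_pct=1.0, hi_pct=99.0):
    """y: array of linear luminance values (nits). Returns (B, W, M).
    The percentile choices are assumptions of this sketch."""
    b = np.percentile(y, lo_pct)   # black level B (assumed percentile)
    w = np.percentile(y, hi_pct)   # white level W (assumed percentile)
    m = np.sqrt(b * w)             # mid-tone level M, equation (13)
    return b, w, m
```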
(54) In general, the choice of the modulation value Ba and the tone adjustment function should preserve information at the very dark level and also preserve details in the mid-tone range (i.e., the neighborhood of the mid-tone value). Thus, two conditions are used when deriving the modulation value Ba: (1) the blacks are not clipped down to zero too aggressively; and (2) the number of codewords in the SDR picture used to represent the mid-tone range of the input image is maximized.
(55) Considering the first condition that blacks should not be clipped down to zero too aggressively, a lower bound for the black level is set, i.e.,
π_Ba(B)≥ε (14)
where π_Ba(·) is the tone-adjustment function, B is a luminance value, and ε is a parameter value.
(56) In
(57) For the second condition (the number of codewords used to encode the mid-tone range should be maximized), the slope of the tone-adjustment function π_Ba at the mid-tone level M is maximized as shown in
(58) Combining both conditions, the modulation value Ba can be uniquely determined by solving the following maximization problem:
(59) Ba=argmax_Ba {π′_Ba(M)} subject to π_Ba(B)≥ε (15)
(60) To solve the optimization problem (15) given a tone-adjustment function, a systematic brute-force search of Ba over the range of acceptable Ba values may be performed, for example to compute the modulation value Ba for each picture of a video. The tone-adjustment function can be the one illustrated in
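A sketch of such a brute-force search follows; taf_for_ba is a hypothetical factory returning the tone-adjustment function π_Ba for a candidate Ba, and the levels B and M are assumed to be normalized to the function's input domain.

```python
def search_ba(taf_for_ba, b_level, m_level, eps=1e-3,
              ba_min=1.0, ba_max=40.0, steps=400):
    """Brute-force solve of problem (15): maximize the slope of pi_Ba at M
    subject to pi_Ba(B) >= eps (condition (14))."""
    best_ba, best_slope, dx = None, -1.0, 1e-4
    for i in range(steps + 1):
        ba = ba_min + (ba_max - ba_min) * i / steps
        pi = taf_for_ba(ba)
        if pi(b_level) < eps:          # condition (14): do not crush the blacks
            continue
        # numerical slope of pi_Ba at the mid-tone level M
        slope = (pi(m_level + dx) - pi(m_level - dx)) / (2.0 * dx)
        if slope > best_slope:
            best_ba, best_slope = ba, slope
    return best_ba
```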
(61) In other examples of the present principles, different methods can be used to determine the modulation value, for example, but not limited to, using an average, median, minimum or maximum value of the luminance of the picture I. These operations may be performed in the linear luminance domain or in a non-linear domain such as ln(l) or l^γ with γ<1.
(62) To further improve the tone-adjustment functions (curves), some bounds can be imposed on the modulation value Ba in order to avoid over-shooting in both very dark frames (Ba too low) and very bright frames (Ba too high). For example, the modulation value Ba is set to
Ba_att=Clip_[Ba_min,Ba_max](…) (16)
to determine an attenuated modulation value Ba_att, with visually determined values Ba_min=2 nits, Ba_max=50 nits, Ba_mid=5 nits and the attenuation factor σ=0.5. This may provide modulation values closer to visually optimal ones.
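The argument of the Clip operator in equation (16) is not legible above; the sketch below therefore assumes a linear attenuation of Ba toward Ba_mid by the factor σ before clipping, which is consistent with the stated parameter values but remains a hypothetical reading.

```python
def attenuate_ba(ba, ba_min=2.0, ba_max=50.0, ba_mid=5.0, sigma=0.5):
    """Hypothetical reading of equation (16): pull Ba toward Ba_mid by the
    attenuation factor sigma, then clip to [Ba_min, Ba_max]."""
    ba_att = ba_mid + sigma * (ba - ba_mid)   # assumed attenuation form
    return min(max(ba_att, ba_min), ba_max)   # Clip_[Ba_min, Ba_max]
```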
(63) In a video sequence, a modulation value Ba may be determined for each picture of said video sequence. In order to avoid temporal inconsistency in scenes with rapid brightness changes, a temporal stabilization is desirable. Fast-changing videos, like a scene showing an explosion or fireworks, may cause modulation values to vary rapidly from picture to picture and cause annoying illumination effects. For instance, a scene with a static background and a sudden flash (like an explosion or fireworks) in the foreground may produce annoying visual artefacts. In such cases, due to the flash, the white level W will increase, as well as M and then the modulation value Ba. When the modulation value Ba is high, the tone-adjustment function suppresses the darks more, and this may induce an unexpected sudden darkening of the background in the tone-adjusted picture. To prevent such an unnatural and annoying temporal variation of the luminosity in the tone-adjusted video sequence, temporal stabilization is proposed to smooth the overall luminosity variation of the video sequence.
(64) In accordance with an example of the present principles, an exponential stabilization is used.
(65) Let Ba.sup.n be the modulation determined at picture n, and Ba.sup.t,n the modulation value after temporal stabilization. The exponential temporal stabilization is then given by:
Ba^(t,n)=λ·Ba^(n)+(1−λ)·Ba^(t,n−1) (17)
with λ a real value adapted to the picture rate.
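Equation (17) transcribes directly into a one-pole recursive filter; the value of λ below is an arbitrary placeholder, since the text only states that λ is adapted to the picture rate.

```python
def stabilize(ba_values, lam=0.1):
    """Exponential temporal stabilization, equation (17):
    Ba_t[n] = lam * Ba[n] + (1 - lam) * Ba_t[n-1]."""
    out, prev = [], ba_values[0]   # state initialized with the first picture
    for ba in ba_values:
        prev = lam * ba + (1.0 - lam) * prev
        out.append(prev)
    return out
```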
(66) Other temporal smoothing filters may be used for temporal stabilizing the modulation values.
(67) In accordance with an example of the present principles, the method further comprises transmitting the modulation value Ba and possibly the input parameter highlightGainControl in a bitstream.
(68) For example, the modulation value Ba and possibly the input parameter highlightGainControl is (are) encoded as metadata contained in an ad hoc SEI message [Technicolor's response to CfE for HDR and WCG (category 1)—Single layer HDR video coding with SDR backward compatibility, ISO/IEC JTC1/SC29/WG11 MPEG2014/M36263, June 2015, Warsaw, Poland].
(69) In accordance with the present principles, illustrated in
(70) Said method for inverse-tone-mapping comprises obtaining a modulation value Ba responsive to a brightness level of the input picture I, and obtaining at least one parameter of the inverse-tone-adjustment function ITAF responsive to said modulation value Ba.
(71) In accordance with an example of the present principles, when the tone adjustment function TAF is defined by equation (1) and its parameters by equations (2), (3) and (4) (first example or one of its variants), the inverse-tone-adjustment function ITAF is obtained as the reciprocal function of the tone adjustment function depicted in equation (1) as follows:
(72)
(73) and where SGC, HGC are given by equation (4).
(74) In accordance with an example of the present principles, when the tone adjustment function TAF is defined by equation (6) (second example) and its parameters by equations (7), (8) and (9), the inverse-tone-adjustment function ITAF is obtained as the reciprocal function of the tone adjustment function depicted in equation (8) as follows:
(75)
(76) and where SGC′, HGC′ are given by equation (9).
(77) In accordance with an example of the present principles, when the tone adjustment function is defined by equation (7) (third example), the inverse-tone-adjustment function ITAF is defined as follows:
ba_N=Ba/PeakL
N=log(1+1/(k·ba_N))
InverseToneAdjustment(x)=k·ba_N·(e^(x·N)−1), ∀x∈[0,1]
(78) The tone-adjustment function (and inverse-tone-adjustment function) may be used in any encoding/decoding scheme that requires a tone-mapping (inverse-tone-mapping) processing.
(79) For example, the HDR/WCG encoding/decoding scheme required by the MPEG HDR/WCG Call for Evidence (Call for Evidence (CfE) for HDR and WCG Video Coding, ISO/IEC JTC1/SC29/WG11 MPEG2014/N15083, February 2015, Geneva, Switzerland) defines two types of requirements. The first type is HDR compression performance, i.e. HDR visual quality as a function of the coded bitrate. The second type of requirement is backward compatibility, i.e. the ability to provide a compressed video bit-stream that can be decoded and displayed with existing legacy equipment, while providing an SDR (Standard Dynamic Range) picture (video) that is viewable as is, without any further processing.
(80)
(81) The method comprises a dynamic conversion step 100, which transforms the input linear HDR picture (R_HDR,G_HDR,B_HDR) into an SDR linear RGB picture, noted (R_SDR,G_SDR,B_SDR). The input HDR picture is linear light, with a peak luminance preferably given by the Mastering Display Max Lum metadata (SMPTE ST 2086). The output linear-light picture (R_SDR,G_SDR,B_SDR) is SDR, and hence has a peak luminance equal to 100 nits. The next steps consist in gamma correction (step 110), conforming to the inverse EOTF of Recommendation ITU-R BT.1886, and color space transformation (step 120), which provides a Y′CbCr output picture that is encoded into a coded video stream F (step 130).
(82)
(83) The method comprises decoding a coded video stream F to obtain a decoded Y′CbCr picture (step 200), an inverse color space transformation (step 210), which provides an R′G′B′ picture from the decoded Y′CbCr picture, an inverse gamma correction (step 220) to obtain an SDR linear RGB picture from the R′G′B′ picture, and an inverse dynamic reduction (step 230), which transforms the SDR linear RGB picture, noted (R_SDR,G_SDR,B_SDR), into the input HDR picture, noted (R_HDR,G_HDR,B_HDR).
(84) The steps 210, 220 and 230 execute reciprocal operations of the steps 120, 110 and 100 respectively.
(85)
(86) As can be seen, the step 100 consists, for each pixel p of the input picture, in computing (step 300) a maximum value T(p) from the component values of each pixel of the input HDR picture (α·R_HDR, β·G_HDR, γ·B_HDR, δ·Y_HDR), where (α, β, γ, δ) are fixed weighting factors, typically all equal to 1, obtaining (step 310) a linear value t(p) by tone-mapping said maximum value T(p), and multiplying (step 320) the component values of each pixel p of the input HDR picture by a ratio
(87) t(p)/T(p)
of the linear value t(p) over the maximum value T(p).
(88) Then the ratio is computed and commonly applied as a multiplicative tone mapping factor onto the input (R_HDR, G_HDR, B_HDR) component values, to provide linear-light tone-mapped RGB values:
(89) (R_SDR,G_SDR,B_SDR)=(t(p)/T(p))·(R_HDR,G_HDR,B_HDR)
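Steps 300 to 320 can be sketched with NumPy as follows. The normalization of T(p) by the source peak luminance before applying the tone-adjustment function, the 100-nit SDR output scaling, and the neutral handling of zero-valued pixels are assumptions of this sketch.

```python
import numpy as np

def dynamic_conversion(rgb_hdr, y_hdr, taf, peak_l,
                       weights=(1.0, 1.0, 1.0, 1.0)):
    """rgb_hdr: (H, W, 3) linear-light HDR picture; y_hdr: (H, W) luminance;
    taf: tone-adjustment function on [0, 1]."""
    al, be, ga, de = weights
    t_max = np.maximum.reduce([al * rgb_hdr[..., 0], be * rgb_hdr[..., 1],
                               ga * rgb_hdr[..., 2], de * y_hdr])  # T(p), step 300
    # t(p), step 310: tone-map the normalized maximum (100-nit SDR peak assumed)
    t_lin = np.vectorize(taf)(t_max / peak_l) * 100.0
    ratio = np.where(t_max > 0.0, t_lin / np.maximum(t_max, 1e-12), 1.0)
    return rgb_hdr * ratio[..., None]                              # step 320
```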
(90)
(91) As can be seen, the step 230 consists in computing (step 400) a maximum value t(p) from the component values of each pixel of the SDR linear RGB picture, noted (R_SDR,G_SDR,B_SDR), obtaining (step 410) a non-linear value T(p) by inverse-tone-mapping the maximum value t(p), and multiplying (step 420) the component values of each pixel of the decoded Y′CbCr picture by a ratio
(92) T(p)/t(p)
of the non-linear value T(p) over the maximum value t(p).
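Steps 400 to 420 mirror the previous sketch; again the 100-nit SDR normalization and the zero-pixel handling are assumptions.

```python
import numpy as np

def inverse_dynamic_reduction(rgb_sdr, itaf, peak_l):
    """rgb_sdr: (H, W, 3) SDR linear RGB picture; itaf: inverse
    tone-adjustment function on [0, 1]."""
    t_max = rgb_sdr.max(axis=-1)                        # t(p), step 400
    # T(p), step 410: 100-nit SDR peak assumed for normalization
    t_big = np.vectorize(itaf)(t_max / 100.0) * peak_l
    ratio = np.where(t_max > 0.0, t_big / np.maximum(t_max, 1e-12), 1.0)
    return rgb_sdr * ratio[..., None]                   # step 420
```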
(93)
(94) In step 500, a non-linear value nt(p) is obtained by transferring the linear value t(p) into a non-linear perceptual domain. To do that, a perceptual curve is used, for instance the inverse EOTF of SMPTE ST 2084:2014 (High Dynamic Range Electro-Optical Transfer Function of Mastering Reference Displays), or the transfer function proposed by Philips in (Philips HDR proposal: dynamic range conversion in relation to the “Unified Model” as discussed in SMPTE DG “Dynamic Metadata for Color Transforms of HDR and WCG Images”, R. Nijland, Koninklijke Philips N.V., 2015-04-21).
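For instance, the inverse EOTF of SMPTE ST 2084 maps an absolute luminance to a perceptually-uniform value in [0, 1]; a direct transcription of its defining constants:

```python
def pq_inverse_eotf(l_nits):
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in cd/m^2
    (up to 10000) to a non-linear value in [0, 1]."""
    m1 = 2610.0 / 16384.0
    m2 = 2523.0 / 4096.0 * 128.0
    c1 = 3424.0 / 4096.0
    c2 = 2413.0 / 4096.0 * 32.0
    c3 = 2392.0 / 4096.0 * 32.0
    y = max(l_nits, 0.0) / 10000.0
    return ((c1 + c2 * y ** m1) / (1.0 + c3 * y ** m1)) ** m2
```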
(95) In step 520, the non-linear value nt(p) is reshaped so as to adapt it to the desired brightness or darkness level of the output picture (R_SDR,G_SDR,B_SDR). This is the so-called “Local Slope Adjustment” process, which allows producing an output SDR tone-mapped picture with a desired level of brightness for backward-compatibility purposes. In other words, the higher the curve, the brighter the produced SDR picture. Moreover, the SDR picture gets darker when the Local Slope Adjustment curve gets closer to the identity function (∀x∈[0,1], f(x)=x).
(96) According to the present principles, the “Local Slope Adjustment” process uses the tone-mapping as described in relation with
(97) In step 530, the linear value t(p) is obtained by applying the inverse of the perceptual transfer function to the non-linear value nt(p). The inverse of the perceptual transfer function is configured with an output peak luminance equal to 100 nits.
(98) The decoder 200 is configured to decode data which have been encoded by the encoder 130. The encoder 130 (and decoder 200) may use block-based processing.
(99) The encoder 130 (and decoder 200) is not limited to a specific encoder/decoder, which may be, for example, a lossy image/video coder such as JPEG, JPEG2000, MPEG-2, the HEVC recommendation (“High Efficiency Video Coding”, SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Recommendation ITU-T H.265, Telecommunication Standardization Sector of ITU, April 2013) or the H.264/AVC recommendation (“Advanced video coding for generic audiovisual services”, SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Recommendation ITU-T H.264, Telecommunication Standardization Sector of ITU, February 2014).
(100) On
(101)
(102) Device 1200 comprises the following elements, which are linked together by a data and address bus 1201: a microprocessor 1202 (or CPU), which is, for example, a DSP (or Digital Signal Processor); a ROM (or Read Only Memory) 1203; a RAM (or Random Access Memory) 1204; an I/O interface 1205 for reception of data to transmit, from an application; and a battery 1206.
(103) In accordance with an example, the battery 1206 is external to the device. In each of the mentioned memories, the word “register” used in the specification can correspond to an area of small capacity (a few bits) or to a very large area (e.g. a whole program or a large amount of received or decoded data). The ROM 1203 comprises at least a program and parameters. The ROM 1203 may store algorithms and instructions to perform techniques in accordance with the present principles. When switched on, the CPU 1202 uploads the program into the RAM and executes the corresponding instructions.
(104) RAM 1204 comprises, in a register, the program executed by the CPU 1202 and uploaded after switch on of the device 1200, input data in a register, intermediate data in different states of the method in a register, and other variables used for the execution of the method in a register.
(105) The implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between end-users.
(106) In accordance with an example of encoding or an encoder, the picture I is obtained from a source. For example, the source belongs to a set comprising: a local memory (1203 or 1204), e.g. a video memory or a RAM (or Random Access Memory), a flash memory, a ROM (or Read Only Memory), a hard disk; a storage interface (1205), e.g. an interface with a mass storage, a RAM, a flash memory, a ROM, an optical disc or a magnetic support; a communication interface (1205), e.g. a wireline interface (for example a bus interface, a wide area network interface, a local area network interface) or a wireless interface (such as an IEEE 802.11 interface or a Bluetooth® interface); and a picture-capturing circuit (e.g. a sensor such as, for example, a CCD (or Charge-Coupled Device) or CMOS (or Complementary Metal-Oxide-Semiconductor)).
(107) In accordance with an example of the decoding or a decoder, the decoded picture Î is sent to a destination; specifically, the destination belongs to a set comprising: a local memory (1203 or 1204), e.g. a video memory or a RAM, a flash memory, a hard disk; a storage interface (1205), e.g. an interface with a mass storage, a RAM, a flash memory, a ROM, an optical disc or a magnetic support; a communication interface (1205), e.g. a wireline interface (for example a bus interface (e.g. USB (or Universal Serial Bus)), a wide area network interface, a local area network interface, an HDMI (High Definition Multimedia Interface) interface) or a wireless interface (such as an IEEE 802.11, WiFi® or Bluetooth® interface); and a display.
(108) In accordance with examples of encoding or encoder, the coded video stream F is sent to a destination. As an example, the stream F is stored in a local or remote memory, e.g. a video memory (1204) or a RAM (1204), a hard disk (1203). In a variant, one or both bitstreams are sent to a storage interface (1205), e.g. an interface with a mass storage, a flash memory, ROM, an optical disc or a magnetic support and/or transmitted over a communication interface (1205), e.g. an interface to a point to point link, a communication bus, a point to multipoint link or a broadcast network.
(109) In accordance with examples of decoding or decoder, the stream F is obtained from a source. Exemplarily, the stream is read from a local memory, e.g. a video memory (1204), a RAM (1204), a ROM (1203), a flash memory (1203) or a hard disk (1203). In a variant, the bitstream is received from a storage interface (1205), e.g. an interface with a mass storage, a RAM, a ROM, a flash memory, an optical disc or a magnetic support and/or received from a communication interface (1205), e.g. an interface to a point to point link, a bus, a point to multipoint link or a broadcast network.
(110) In accordance with examples, device 1200 is configured to implement a method described in relation with
(111) In accordance with examples, device 1200 is configured to implement an encoding method described in relation with
(112) In accordance with examples, device 1200 is configured to implement a decoding method described in relation with
(113) According to an example of the present principles, illustrated in
(114) In accordance with an example, the network is a broadcast network, adapted to broadcast still pictures or video pictures from device A to decoding devices including the device B.
(115) A signal, intended to be transmitted by the device A, carries the stream F. The stream F comprises a modulation value Ba responsive to a brightness level of the input picture and intended to be used to determine a parameter of an inverse-tone-mapping method as explained before. Optionally, the stream F further comprises the input parameter highlightGainControl.
(116)
(117) Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications. Examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, and any other device for processing a picture or a video or other communication devices. As should be clear, the equipment may be mobile and even installed in a mobile vehicle.
(118) Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a computer readable storage medium. A computer readable storage medium can take the form of a computer readable program product embodied in one or more computer readable medium(s) and having computer readable program code embodied thereon that is executable by a computer. A computer readable storage medium as used herein is considered a non-transitory storage medium given the inherent capability to store the information therein as well as the inherent capability to provide retrieval of the information therefrom. A computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. It is to be appreciated that the following, while providing more specific examples of computer readable storage mediums to which the present principles can be applied, is merely an illustrative and not exhaustive listing as is readily appreciated by one of ordinary skill in the art: a portable computer diskette; a hard disk; a read-only memory (ROM); an erasable programmable read-only memory (EPROM or Flash memory); a portable compact disc read-only memory (CD-ROM); an optical storage device; a magnetic storage device; or any suitable combination of the foregoing.
(119) The instructions may form an application program tangibly embodied on a processor-readable medium.
(120) Instructions may be, for example, in hardware, firmware, software, or a combination. Instructions may be found in, for example, an operating system, a separate application, or a combination of the two. A processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
(121) As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry as data the rules for writing or reading the syntax of a described example of the present principles, or to carry as data the actual syntax-values written by a described example of the present principles. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on a processor-readable medium.
(122) A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations are contemplated by this application.