Training method, image encoding method, image decoding method and apparatuses thereof
11330264 · 2022-05-10
Assignee
Inventors
- Jing Zhou (Beijing, CN)
- Akira Nakagawa (Kawasaki, JP)
- Sihan Wen (Beijing, CN)
- Zhiming Tan (Beijing, CN)
CPC classification
H04N19/184
ELECTRICITY
International classification
H04B1/66
ELECTRICITY
H04N19/184
ELECTRICITY
Abstract
Embodiments of this disclosure provide a training method, an image encoding method, an image decoding method and apparatuses thereof. The image encoding apparatus includes: an image encoder configured to encode input image data to obtain a latent variable; a quantizer configured to perform quantizing processing on the latent variable according to a quantization step to generate a quantized latent variable; and an entropy encoder configured to perform entropy coding on the quantized latent variable by using an entropy model to form a bit stream.
Claims
1. A training device for an image processing apparatus, in which an image encoder and an image decoder are trained by using a training image, the training device comprises: a memory to store a plurality of instructions; and a processor coupled to the memory and configured to: acquire a latent variable obtained by the image encoder by encoding input training image data; acquire first restored image data obtained by the image decoder by decoding the latent variable and second restored image data obtained by the image decoder by decoding a sum of the latent variable and a noise; and train the image encoder and the image decoder according to a cost function, the cost function being related to a deviation between the input training image data and the first restored image data and a deviation between the first restored image data and the second restored image data.
2. An image encoding apparatus, comprising: an image encoder configured to encode input image data to obtain a latent variable, the image encoder encoding the input image data according to training by the training device as claimed in claim 1; a quantizer configured to perform quantizing processing on the latent variable according to a quantization operation to generate a quantized latent variable; and an entropy encoder configured to perform entropy coding on the quantized latent variable by using an entropy model to form a bit stream.
3. The image encoding apparatus according to claim 2, wherein the image encoding apparatus further comprises: a quantization adjuster configured to adjust the quantization operation to adjust a bit rate of the bit stream.
4. The image encoding apparatus according to claim 2, wherein, the quantizing processing of the quantizer is non-uniform quantizing processing.
5. The image encoding apparatus according to claim 4, wherein, the non-uniform quantizing processing comprises: taking a latent variable to which a probability distribution peak value of the latent variable corresponds as a zero point, a latent variable of a first range containing the zero point corresponding to a first quantized latent variable; and for other quantized latent variables than the first quantized latent variables, the other quantized latent variables corresponding to latent variables of a second range, the second range being less than the first range.
6. The image encoding apparatus according to claim 5, wherein, the probability distribution peak value of the latent variable is obtained based on the entropy model.
7. An image decoding apparatus, comprising: an entropy decoder configured to perform entropy decoding on a bit stream by using an entropy model to form a quantized latent variable; a de-quantizer configured to perform de-quantizing processing on the quantized latent variable according to a quantization operation to generate a reconstructed latent variable; and an image decoder configured to perform decoding processing on the reconstructed latent variable to obtain restored image data, the image decoder performing the decoding processing according to training by the training device as claimed in claim 1.
8. The image decoding apparatus according to claim 7, wherein, the de-quantizer performs the de-quantizing processing according to the quantization operation.
9. The image decoding apparatus according to claim 7, wherein the image decoding apparatus further comprises: a quantization adjuster configured to adjust the quantization operation.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Elements and features depicted in one drawing or embodiment of the disclosure may be combined with elements and features depicted in one or more additional drawings or embodiments. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views and may be used to designate like or similar parts in more than one embodiment.
(2) In the drawings:
DETAILED DESCRIPTION
(11) These and further aspects and features of this disclosure will be apparent with reference to the following description and attached drawings. Implementations are illustrative only, and are not intended to limit this disclosure. These implementations of the embodiments of this disclosure shall be described below with reference to the accompanying drawings.
(12) In the embodiments of this disclosure, the terms “first” and “second”, etc., are used to differentiate different elements with respect to names, and do not indicate spatial arrangement or temporal orders of these elements; these elements should not be limited by these terms. The term “and/or” includes any one and all combinations of one or more relevantly listed terms. The terms “contain”, “include” and “have” refer to the existence of stated features, elements, components, or assemblies, but do not exclude the existence or addition of one or more other features, elements, components, or assemblies.
(13) In the embodiments of this disclosure, the singular forms “a” and “the”, etc., include plural forms, and should be understood in a broad sense as “a kind of” or “a type of”, but should not be defined as meaning “one”; the term “the” should be understood as including both a singular form and a plural form, unless specified otherwise. Furthermore, the term “according to” should be understood as “at least partially according to”, and the term “based on” should be understood as “at least partially based on”, unless specified otherwise.
(14) After the encoder network is obtained by training based on the loss function (R+λ*D), the bit rate and the level of distortion of the image are determined.
(15) It was found by the inventors that when the bit rate needs to be adjusted, the value of λ is usually modified multiple times; for each value of λ, the encoder network needs to be retrained, and the encoder network with a bit rate closest to the needed bit rate is selected. Such a method for adjusting the bit rate is relatively cumbersome.
(16) Embodiments of this disclosure provide a training method, an image encoding method, an image decoding method and apparatuses thereof, wherein an image encoder obtained according to the training method is able to expediently achieve adjustment of different bit rates.
(17) An advantage of the embodiments of this disclosure exists in that the image encoder obtained according to the training method is able to expediently achieve adjustment of different bit rates.
(18) Embodiment of the First Aspect
(19) Embodiment of the first aspect of this disclosure provides an image encoding apparatus and an image decoding apparatus.
(20) As shown in
(21) As shown in
(22) The image encoder 11 encodes the inputted image data x to obtain a latent variable z. The image encoder 11 may perform encoding processing based on a deep neural network. For example, the image encoder 11 may be implemented via a basic convolution layer and/or a deconvolution layer, and/or by taking generalized divisive normalization (GDN)/inverse generalized divisive normalization (IGDN) as an activation function. Reference may be made to related techniques for a concept and contents of the deep neural network.
(23) The quantizer 12 may perform quantizing processing according to a quantization step Q on the latent variable z outputted by the image encoder 11 to generate a quantized latent variable ẑ_enc. The latent variable z is floating-point data, and the quantizing processing transforms the floating-point data into data of finite length.
(24) The entropy encoder 13 performs entropy coding on the quantized latent variable ẑ_enc by using an entropy model 14 to form the bit stream 100. The bit stream 100 may also be referred to as a bitstream, and is a data stream containing multiple bits. Through the entropy coding, the quantized latent variable ẑ_enc, which is difficult to store and transmit, is converted into the bit stream 100, which is easy to store and transmit. In addition, entropy coding is coding based on the entropy principle without losing information; therefore, the information contained in the bit stream 100 may completely reflect the information in the quantized latent variable ẑ_enc.
(25) In at least one embodiment, the entropy model 14 may be used to estimate entropy of the latent variable z, and the entropy encoder 13 may perform entropy coding on a result of the entropy estimation of the latent variable z based on the entropy model 14. The entropy model 14 may be, for example, a factorized entropy model.
(26) The bit rate R of the bit stream 100 generated by the entropy encoder 13 may be expressed as R=n/(W*H); where n denotes the length of the bit stream 100 in bits, and W and H respectively denote a width and a height of the image to which the image data x correspond, both the width and height being expressed in numbers of pixels.
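As an illustrative sketch (the function name is an assumption, not part of this disclosure), the bit rate R of the expression above may be computed as follows:

```python
def bit_rate(n_bits: int, width: int, height: int) -> float:
    """Bit rate R = n / (W * H): bits per pixel of the image
    to which the image data x correspond."""
    return n_bits / (width * height)
```

For a 512x384 image encoded into 98304 bits, this gives a rate of 0.5 bits per pixel.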
(27) The bit stream 100 generated by the entropy encoder 13 may be stored or transmitted to the image decoding apparatus 2.
(28) As shown in
(29) The entropy decoder 23 performs entropy decoding on the received bit stream 100 by using the entropy model 14 to form the quantized latent variable ẑ_enc. The processing of the entropy decoding may be inverse processing of the entropy coding processing of the entropy encoder 13.
(30) The de-quantizer 22 performs de-quantizing processing on the quantized latent variable ẑ_enc according to the quantization step Q to generate the reconstructed latent variable ẑ. The de-quantizing processing may be inverse processing of the quantizing processing.
(31) The image decoder 21 performs decoding processing on the reconstructed latent variable ẑ to obtain restored image data x̂. The image decoder 21 may perform the decoding processing based on a deep neural network. For example, the image decoder 21 may be implemented via a basic convolution layer and/or a deconvolution layer, and/or by taking generalized divisive normalization (GDN)/inverse generalized divisive normalization (IGDN) as an activation function. Reference may be made to related techniques for a concept and contents of the deep neural network.
(32) In at least one embodiment, the image encoder 11 and the image decoder 21 may be an image encoder and image decoder based on a rate-distortion optimization guided autoencoder for generative analysis (RaDOGAGA) model. Reference may be made to related techniques for a detailed principle of the RaDOGAGA model, such as that described on the following webpage: https://arxiv.org/abs/1910.04329.
(33) In at least one embodiment, the image encoder 11 and the image decoder 21 may be trained by using a training device based on the RaDOGAGA model.
(34)
(35) As shown in
z=f_θ(x) (1);
(36) where f_θ denotes the encoding processing of the image encoder 11, the encoding processing taking θ as a parameter.
(37) The second acquiring unit 320 acquires first restored image data x̂ obtained by the image decoder 21 by decoding the latent variable z, and acquires second restored image data x̆ obtained by the image decoder 21 by decoding a sum (z+ε) of the latent variable z and a noise ε. For example, x̂ and x̆ may be expressed as the following formula (2):
x̂=g_ϕ(z), x̆=g_ϕ(z+ε) (2);
(38) where g_ϕ denotes the decoding processing of the image decoder 21, the decoding processing taking ϕ as a parameter. In addition, the noise ε may be a uniform noise.
(39) The training unit 33 trains the image encoder 11 and the image decoder 21 according to a cost function L, the cost function L being related to a deviation h(D(x,x̂)) between the input training image data x and the first restored image data x̂ and a deviation D(x̂,x̆) between the first restored image data x̂ and the second restored image data x̆. Furthermore, training the image encoder 11 and the image decoder 21 by the training unit 33 means that the training unit 33 trains a network in the image encoder 11 and a network in the image decoder 21.
(40) In at least one embodiment, the cost function L may be expressed as the following formula (3):
L=−log(P_z,ψ(z))+λ_1×h(D(x,x̂))+λ_2×D(x̂,x̆) (3).
(41) In the first term −log(P_z,ψ(z)) of formula (3), P_z,ψ(z) denotes a probability of the latent variable z, which takes ψ as a parameter. A cumulative density function (CDF) of the latent variable z may be obtained by the entropy model 14 in
(42) Furthermore, in the entropy model 14, the cumulative density function CDF may conform to a relationship shown in the following formulae (4a) and (4b):
(43) P_z,ψ(z)=CDF(z+α/2)−CDF(z−α/2) (4a),
R_z=−Σ log_2(P_z,ψ(z))/(H×W) (4b);
(44) where α denotes a quantization step used in calculating the bit rate of the latent variable z, and R_z denotes the bit rate of the latent variable z. H and W respectively denote a height and a width of the input image.
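Assuming the entropy model 14 supplies such a cumulative density function, the probability of a latent value over a bin of width α, and its contribution in bits, may be sketched as follows (the function names and the toy CDF used in the usage note are illustrative assumptions, not part of this disclosure):

```python
import math

def latent_probability(cdf, z: float, alpha: float) -> float:
    # Probability mass of latent z over a bin of width alpha:
    # P(z) = CDF(z + alpha / 2) - CDF(z - alpha / 2).
    return cdf(z + alpha / 2) - cdf(z - alpha / 2)

def latent_bits(cdf, z: float, alpha: float) -> float:
    # Contribution of z to the bit rate: -log2 of its probability.
    return -math.log2(latent_probability(cdf, z, alpha))
```

For example, with a toy uniform CDF on [0, 1], a bin of width 0.5 around z=0.5 has probability 0.5 and therefore costs exactly 1 bit.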
(45) In formula (3), the second term λ_1×h(D(x,x̂)) is used to calculate the reconstruction losses of the image encoder 11 and the image decoder 21, and the third term λ_2×D(x̂,x̆) reflects a scaling relationship between the image space and the latent space. λ_1 is used to control a degree of reconstruction, and λ_2 is used to control a scaling ratio between the image space and the latent space.
(46) In the second term λ_1×h(D(x,x̂)) and the third term λ_2×D(x̂,x̆) of formula (3), D(x_1,x_2) is a distortion function measuring a difference between x_1 and x_2. Distortion metrics used in the field of image encoding include a mean square error (MSE), a peak signal-to-noise ratio (PSNR), a multi-scale structural similarity (MS-SSIM) index, and a structural similarity (SSIM) index. Corresponding to the aforementioned metrics, the distortion function D(x_1,x_2) may be an MSE distortion function, a PSNR distortion function, an MS-SSIM index distortion function, or an SSIM index distortion function.
(47) In the second term of formula (3), h(D) may be log(D). Hence, the curve of the loss function is steeper around log(D)=0, so that the image encoder 11 and the image decoder 21 may obtain better reconstruction characteristics and orthogonality. However, this disclosure is not limited thereto, and h(D) may also be D.
(48) In a particular example, a shape of the input training image x is H*W*3, where H is the height of the training image x, W is the width of the training image x, and 3 denotes 3 channels; a value of the noise ε is between −0.5 and 0.5, and a value of α is 0.2; in the image encoder 11, a shape of each generated feature image is H/16*W/16; in a first stage of training, a mean square error (MSE) distortion function is used as the distortion function, with h(D)=D; and in a second stage of training, a multi-scale structural similarity (MS-SSIM) based distortion function D(x_1,x_2)=1−MS-SSIM(x_1,x_2) is used, with h(D)=log(D); that is, in the second stage of training, the image encoder 11 and the image decoder 21 are trained by using a loss function L of the following formula (5):
L=−log(P_z,ψ(z))+λ_1×log(1−MS-SSIM(x,x̂))+λ_2×(1−MS-SSIM(x̂,x̆)) (5).
(49) In formula (5), λ.sub.1 may be 1, and λ.sub.2 may be greater than 100.
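A minimal sketch of the cost function of formula (3), assuming MSE as the distortion function D (the function and parameter names are illustrative assumptions, not part of this disclosure):

```python
import numpy as np

def cost_function(p_z, x, x_hat, x_breve, lam1=1.0, lam2=100.0, h=np.log):
    """Cost L of formula (3): rate term plus reconstruction and scaling terms.

    p_z     -- probability of the latent variable z from the entropy model
    x       -- input training image data
    x_hat   -- first restored image data, decoded from z
    x_breve -- second restored image data, decoded from z + noise
    """
    def mse(a, b):  # MSE chosen here as the distortion function D
        return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

    rate = -np.log(p_z)                       # first term: -log(P(z))
    reconstruction = lam1 * h(mse(x, x_hat))  # second term: lam1 * h(D(x, x_hat))
    scaling = lam2 * mse(x_hat, x_breve)      # third term: lam2 * D(x_hat, x_breve)
    return rate + reconstruction + scaling
```

With h=log, the reconstruction term vanishes when D(x,x̂)=1 and grows steeply as D approaches 0, which is the behavior described for h(D)=log(D) above.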
(50) A process of training the image encoder 11 and the image decoder 21 by the training device 3 is described above with reference to
(51) In the first aspect of the embodiments of this disclosure, with the training of the training device 3, the image encoder 11 and the image decoder 21 may be obtained, and the image encoding apparatus 1 with the image encoder 11 may easily achieve adjustment of different bit rates. Furthermore, the image decoding apparatus 2 having the image decoder 21 may be adapted to different bit rates.
(52) Operations of the image encoding apparatus 1 and the image decoding apparatus 2 related to the quantizing processing shall be described below.
(53) In at least one embodiment, the quantizing processing of the quantizer 12 may be non-uniform quantizing processing. The non-uniform quantizing processing may include: taking the latent variable z to which a probability distribution peak value (or center value) of the latent variable z corresponds as a zero point, and making the latent variable z in a first range containing the zero point correspond to the first quantized latent variable ẑ_enc; for each quantized latent variable ẑ_enc other than the first quantized latent variable ẑ_enc, the quantized latent variable ẑ_enc corresponds to the latent variable z in a second range, the second range being not greater than the first range. The probability distribution peak value of the latent variable z may be obtained based on the entropy model 14.
(54) For example, the quantizer 12 may perform the quantizing processing by using the following formula (6):
(55) ẑ_enc=sign(z)×floor(abs(z)/Q+offset) (6);
(56) where sign(z) denotes the sign of the latent variable z: if z is greater than 0, sign(z) is 1, and if z is less than 0, sign(z) is −1; floor( ) denotes rounding down; abs(z) denotes that an absolute value of z is taken; and offset is a preset offset, with 0≤offset≤0.5.
(57) In this disclosure, offset may be used to set a length of the first range, that is, the length of the first range is 2*(1−offset)*Q. A length of the second range is equal to the quantization step Q.
(58) In at least one embodiment, the offset is not equal to 0.5, the length of the second range is less than the length of the first range, and the quantizing processing performed by the quantizer 12 is non-uniform quantizing processing. Therefore, after the quantizing processing, the entropy of the quantized latent variable ẑ_enc is smaller. In addition, this disclosure is not limited thereto. For example, when the offset is equal to 0.5, the length of the second range is equal to the length of the first range, and the quantizing processing performed by the quantizer 12 is uniform quantizing processing.
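A minimal sketch of the quantizing processing described above, using sign( ), floor( ), abs( ) and the preset offset (the function and parameter names are illustrative assumptions):

```python
import math

def quantize(z: float, Q: float, offset: float = 0.2) -> int:
    """Quantize latent z with step Q: sign(z) * floor(abs(z) / Q + offset).

    The zero bin covers (-(1 - offset) * Q, (1 - offset) * Q), i.e. a width
    of 2 * (1 - offset) * Q; every other bin has width Q. The processing is
    non-uniform when offset != 0.5 and uniform when offset == 0.5."""
    if z == 0:
        return 0
    sign = 1 if z > 0 else -1
    return sign * math.floor(abs(z) / Q + offset)
```

With Q=1 and offset=0.2, latents with magnitude below 0.8 fall into the widened zero bin, so the entropy of the quantized latents is reduced relative to uniform quantization.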
(59) The quantized latent variable ẑ_enc generated by the quantizer 12 is subjected to entropy coding by the entropy encoder 13 to form the bit stream 100. The bit stream 100 is entropy-decoded by the entropy decoder 23, so that the quantized latent variable ẑ_enc is obtained in the image decoding apparatus 2.
(60) In at least one embodiment, the de-quantizer 22 may perform de-quantizing processing by using the quantization step Q. For example, the de-quantizer 22 may de-quantize the quantized latent variable ẑ_enc outputted by the entropy decoder 23 by using the following formula (7), thereby obtaining the reconstructed latent variable ẑ:
ẑ=ẑ_enc×Q (7).
(61) Based on the entropy model 14, a cumulative density function (CDF) of the reconstructed latent variable ẑ may be obtained. The latent variable z is quantized by the quantizer 12, and z may be quantized to the corresponding representative value ẑ based on the quantization step. An upper bound of the interval of z to which ẑ corresponds is z_high, and a lower bound thereof is z_low; that is, all z in the interval [z_low, z_high] are quantized to the corresponding ẑ; where ẑ_enc=ẑ/Q, and 0<ω<1:
z_high=(ẑ_enc+0.5+sign(sign(ẑ_enc)+ω)×(0.5−offset))×Q (8),
z_low=(ẑ_enc−0.5+sign(sign(ẑ_enc)−ω)×(0.5−offset))×Q (9).
(62) According to z_high and z_low, a bit rate R_ẑ of the reconstructed latent variable ẑ may be obtained by using formula (10) below:
(63) R_ẑ=−Σ log_2(CDF(z_high)−CDF(z_low))/(H×W) (10).
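A minimal sketch of the de-quantizing of formula (7) together with the interval [z_low, z_high] covered by each quantized value; the lower-bound expression is written so that adjacent intervals tile consistently with the quantizer (the names are illustrative assumptions; ω may be any value in (0, 1)):

```python
import math

def _sign(v: float) -> float:
    # sign(v): 1 for positive, -1 for negative, 0 for zero.
    return 0.0 if v == 0 else math.copysign(1.0, v)

def dequantize(z_enc: int, Q: float) -> float:
    # Formula (7): the reconstructed latent is the representative value.
    return z_enc * Q

def quantization_interval(z_enc: int, Q: float, offset: float = 0.2,
                          omega: float = 0.5):
    """Bounds [z_low, z_high] of the latent values z that all quantize
    to z_enc; 0 < omega < 1."""
    z_high = (z_enc + 0.5 + _sign(_sign(z_enc) + omega) * (0.5 - offset)) * Q
    z_low = (z_enc - 0.5 + _sign(_sign(z_enc) - omega) * (0.5 - offset)) * Q
    return z_low, z_high
```

With Q=1 and offset=0.2, the quantized value 1 covers latents in [0.8, 1.8) and the zero bin covers (−0.8, 0.8), matching the stated bin widths.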
(65) As shown in
(66) As shown in
(67) As shown in
(68) As shown in
(69) In the image encoding apparatus 1 of this disclosure, the image encoder 11 is an image encoder based on an RaDOGAGA model. By adjusting the quantization step Q, the bit rate can be adjusted, so that bit rate adjustment may be performed conveniently and quickly. In a traditional method, in contrast, the value of λ in the loss function needs to be modified multiple times; for each value of λ, the encoder network needs to be retrained, and the encoder network with a bit rate closest to the needed bit rate is determined; hence, the process of adjusting the bit rate is relatively cumbersome.
(70) In order to compare the performance of the image encoding apparatus 1 of this disclosure with that of a traditional image encoding apparatus, experiments were performed on both based on the universal test data set Kodak, and bit rate-distortion (R-D) curves of the two were drawn respectively. The traditional image encoding apparatus adopts an encoding network structure identical to that of Ballé [2017], for example. In order to draw the R-D curve of the traditional image encoding apparatus, image codec networks were trained separately for different λ∈{4, 8, 16, 32, 64, 96}, and a distortion metric MS-SSIM_dB was used to denote the degrees of distortion of the image codec networks, where MS-SSIM_dB=−10 log_10(1−MS-SSIM). The Rs and Ds to which the 6 image codec networks respectively correspond were fitted into a first curve.
(71) For the image encoding apparatus 1 of this disclosure, the network of the image encoder 11 did not need to be trained multiple times; instead, the quantization step Q was adjusted, where Q∈{0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.5, 3, 3.5, 4}, and the Rs and Ds to which the quantization steps correspond were calculated. The Rs and Ds to which the quantization steps Q respectively correspond were fitted into a second curve.
(72)
(73) In
(74) As shown in
(75) Embodiment of the Second Aspect
(76) The embodiment of this disclosure provides an image encoding method, an image decoding method and a training method.
(77)
(78) operation 51: an image encoder encodes input image data x to obtain a latent variable z;
(79) operation 52: a quantizer performs quantizing processing on the latent variable z according to a quantization step Q to generate a quantized latent variable; and
(80) operation 53: an entropy encoder performs entropy coding on the quantized latent variable by using an entropy model to form a bit stream.
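Operations 51 to 53 above can be sketched as one pipeline; the image encoder network and the entropy coder below are toy stand-ins (illustrative assumptions, not this disclosure's trained networks):

```python
import math

def encode_pipeline(x, image_encoder, entropy_encoder, Q=1.0, offset=0.2):
    # Operation 51: the image encoder maps input image data x to latents z.
    z = image_encoder(x)
    # Operation 52: quantize each latent according to the quantization step Q.
    z_enc = [0 if v == 0 else
             (1 if v > 0 else -1) * math.floor(abs(v) / Q + offset)
             for v in z]
    # Operation 53: entropy-code the quantized latents into a bit stream.
    return entropy_encoder(z_enc)
```

Adjusting Q here changes how coarsely the latents are quantized, which is the mechanism operation 54 uses to adjust the bit rate.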
(81) As shown in
(82) operation 54: a first quantization step adjuster adjusts the quantization step Q to adjust a bit rate of the bit stream.
(83) In at least one embodiment, the quantizing processing of the quantizer is non-uniform quantizing processing. The non-uniform quantizing processing includes:
(84) taking a latent variable z to which a probability distribution peak value of the latent variable z corresponds as a zero point, a latent variable z in a first range containing the zero point corresponding to a first quantized latent variable; and for other quantized latent variables than the first quantized latent variable, the other quantized latent variables corresponding to latent variables z of a second range, the second range being not greater than the first range.
(85) The probability distribution peak value of the latent variable z is obtained based on the entropy model.
(86) Reference may be made to the description of corresponding units in
(87)
(88) operation 61: an entropy decoder performs entropy decoding on a bit stream by using an entropy model to form a quantized latent variable;
(89) operation 62: a de-quantizer performs de-quantizing processing on the quantized latent variable according to a quantization step to generate a reconstructed latent variable; and
(90) operation 63: an image decoder performs decoding processing on the reconstructed latent variable to obtain restored image data.
(91) The de-quantizer in operation 62 performs the de-quantizing processing according to the quantization step.
(92) As shown in
(93) operation 64: a second quantization step adjuster adjusts the quantization step Q.
(94) Reference may be made to the description of corresponding units in
(95)
(96) operation 71: a latent variable obtained by the image encoder by encoding input training image data is acquired;
(97) operation 72: first restored image data obtained by the image decoder by decoding the latent variable z and second restored image data obtained by the image decoder by decoding a sum (z+ε) of the latent variable z and a noise ε are acquired; and
(98) operation 73: the image encoder and the image decoder are trained according to a cost function L, the cost function L being related to a deviation between the input training image data x and the first restored image data and a deviation between the first restored image data and the second restored image data.
(99) Reference may be made to the description of corresponding units in
(100) Embodiment of the Third Aspect
(101) The embodiment of this disclosure provides an electronic device, including the image encoding apparatus 1, and/or the image decoding apparatus 2, and/or the training device 3, described in the embodiment of the first aspect, the contents of which are incorporated herein. The electronic device may be, for example, a computer, a server, a workstation, a laptop computer, or a smartphone; however, the embodiment of this disclosure is not limited thereto.
(102)
(103) In an embodiment, functions of the image encoding apparatus 1 and/or the image decoding apparatus 2 and/or the training device 3 may be integrated into the processor 810. The processor 810 may be configured to carry out the image encoding method and/or the image decoding method and/or the training method as described in the embodiment of the second aspect.
(104) In another embodiment, the image encoding apparatus 1 and/or the image decoding apparatus 2 and/or the training device 3 and the processor 810 may be configured separately. For example, the image encoding apparatus 1 and/or the image decoding apparatus 2 and/or the training device 3 may be configured as a chip connected to the processor 810, and the functions of the image encoding apparatus 1 and/or the image decoding apparatus 2 and/or the training device 3 are executed under control of the processor 810.
(105) Reference may be made to embodiments 1 and 2 for particular implementation of the processor 810, which shall not be described herein any further.
(106) Furthermore, as shown in
(107) An embodiment of the present disclosure provides a computer readable program code, which, when executed in an image encoding apparatus and/or an image decoding apparatus and/or a training device, will cause a computer to carry out the image encoding method and/or the image decoding method and/or the training method described in the embodiment of the second aspect in the image encoding apparatus and/or the image decoding apparatus and/or the training device.
(108) An embodiment of the present disclosure provides a computer storage medium, including a computer readable program code, which will cause a computer to carry out the image encoding method and/or the image decoding method and/or the training method described in the embodiment of the second aspect in an image encoding apparatus and/or an image decoding apparatus and/or a training device.
(109) The image encoding apparatus or the image decoding apparatus or the training device described with reference to the embodiments of this disclosure may be directly embodied as hardware, software modules executed by a processor, or a combination thereof. For example, one or more functional block diagrams and/or one or more combinations of the functional block diagrams shown in the drawings may either correspond to software modules of procedures of a computer program, or correspond to hardware modules. Such software modules may respectively correspond to the steps shown in the drawings. And a hardware module may, for example, be carried out by fixing the software modules in firmware by using a field programmable gate array (FPGA).
(110) The software modules may be located in a RAM, a flash memory, a ROM, an EPROM, an EEPROM, a register, a hard disk, a floppy disk, a CD-ROM, or any memory medium in other forms known in the art. A memory medium may be coupled to a processor, so that the processor is able to read information from the memory medium and write information into the memory medium; or the memory medium may be a component of the processor. The processor and the memory medium may be located in an ASIC. The software modules may be stored in a memory of an image encoding apparatus or an image decoding apparatus, and may also be stored in a memory card of an image encoding apparatus or an image decoding apparatus.
(111) One or more functional blocks and/or one or more combinations of the functional blocks in the drawings may be realized as a universal processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware component or any appropriate combinations thereof carrying out the functions described in this application. And the one or more functional block diagrams and/or one or more combinations of the functional block diagrams in the drawings may also be realized as a combination of computing equipment, such as a combination of a DSP and a microprocessor, multiple processors, one or more microprocessors in communication combination with a DSP, or any other such configuration.
(112) This disclosure is described above with reference to particular embodiments. However, it should be understood by those skilled in the art that such a description is illustrative only, and not intended to limit the protection scope of the present disclosure. Various variants and modifications may be made by those skilled in the art according to the principle of the present disclosure, and such variants and modifications fall within the scope of the present disclosure.
(113) For implementations of this disclosure containing the above embodiments, following supplements are further disclosed.
(114) 1. A training device for an image processing apparatus, in which an image encoder and an image decoder are trained by using a training image, the training device including:
(115) a first acquiring unit configured to acquire a latent variable z obtained by the image encoder by encoding input training image data;
(116) a second acquiring unit configured to acquire first restored image data obtained by the image decoder by decoding the latent variable z and second restored image data obtained by the image decoder by decoding a sum (z+ε) of the latent variable z and a noise ε; and
(117) a training unit configured to train the image encoder and the image decoder according to a cost function L, the cost function L being related to a deviation between the input training image data x and the first restored image data and a deviation between the first restored image data and the second restored image data.
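The cost function L of supplement 1 can be sketched as follows. The deviation measure (mean-squared error), the uniform noise distribution, and the weighting factor `lam` are all assumptions made for illustration; the supplement only requires that L depend on both deviations.

```python
import numpy as np

def cost(x, encode, decode, noise_scale=0.5, lam=1.0):
    """Cost L of supplement 1: deviation(x, x_hat1) + lam * deviation(x_hat1, x_hat2).

    Mean-squared error, uniform noise, and the weight lam are assumptions;
    the supplement only requires L to be related to both deviations.
    """
    z = encode(x)                            # latent variable z
    x_hat1 = decode(z)                       # first restored image data
    eps = np.random.uniform(-noise_scale, noise_scale, size=np.shape(z))
    x_hat2 = decode(z + eps)                 # second restored image data from z + eps
    d1 = np.mean((x - x_hat1) ** 2)          # deviation: input vs first restoration
    d2 = np.mean((x_hat1 - x_hat2) ** 2)     # deviation: first vs second restoration
    return d1 + lam * d2
```

With perfect (identity) encoder and decoder and zero noise, both deviations vanish and L is zero; training minimizes L over the encoder and decoder parameters.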
(118) 2. An image encoding apparatus, including:
(119) an image encoder configured to encode input image data x to obtain a latent variable z, the image encoder being obtained by training by the training device as described in supplement 1;
(120) a quantizer configured to perform quantizing processing on the latent variable z according to a quantization step Q to generate a quantized latent variable; and
(121) an entropy encoder configured to perform entropy coding on the quantized latent variable by using an entropy model to form a bit stream.
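As a sketch of the quantizing processing of supplement 2 in its simplest, uniform form (the non-uniform variant is the subject of supplements 4 to 6): dividing the latent variable by the quantization step Q and rounding yields the integer symbols passed to the entropy encoder, and a larger Q produces a coarser symbol set and hence a lower bit rate, which is what the adjuster of supplement 3 exploits.

```python
import numpy as np

def quantize(z, Q):
    """Uniform quantizing processing with quantization step Q.

    A larger Q maps more latent values onto the same symbol, so the
    entropy-coded bit stream becomes shorter (lower bit rate).
    """
    return np.round(z / Q).astype(np.int64)
```

For example, `quantize(np.array([0.2, 1.6, -2.7]), 1.0)` yields `[0, 2, -3]`, while the coarser step `Q = 3.0` yields `[0, 1, -1]` for the same latents.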
(122) 3. The image encoding apparatus according to supplement 2, wherein the image encoding apparatus further includes:
(123) a first quantization step adjuster configured to adjust the quantization step Q to adjust a bit rate of the bit stream.
(124) 4. The image encoding apparatus according to supplement 2, wherein,
(125) the quantizing processing of the quantizer is non-uniform quantizing processing.
(126) 5. The image encoding apparatus according to supplement 4, wherein,
(127) the non-uniform quantizing processing includes:
(128) taking a latent variable z corresponding to a probability distribution peak value of the latent variable z as a zero point, a latent variable z of a first range containing the zero point corresponding to a first quantized latent variable; and
(129) for quantized latent variables other than the first quantized latent variable, the other quantized latent variables corresponding to latent variables z of a second range, the second range being not greater than the first range.
(130) 6. The image encoding apparatus according to supplement 5, wherein,
(131) the probability distribution peak value of the latent variable z is obtained based on the entropy model.
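Supplements 4 to 6 describe the non-uniform quantizing processing only structurally. A sketch under the simplifying assumption of just two bin widths (a wide central bin of width `w_center` around the probability peak, and outer bins of width `w_outer` no wider than it; both names and the two-width scheme are assumptions) could look like:

```python
import numpy as np

def nonuniform_quantize(z, peak, w_center, w_outer):
    """Non-uniform quantizing processing per supplements 4-6 (a sketch).

    The latent value at the probability distribution peak is the zero
    point; the bin containing it has width w_center, all other bins
    width w_outer, with w_outer <= w_center so that the
    high-probability region around the peak maps to a single symbol.
    """
    assert w_outer <= w_center
    c = z - peak                       # shift so the peak maps to 0
    half = w_center / 2.0
    q = np.zeros_like(c, dtype=np.int64)
    above = c > half                   # values right of the central bin
    below = c < -half                  # values left of the central bin
    q[above] = np.ceil((c[above] - half) / w_outer).astype(np.int64)
    q[below] = -np.ceil((-c[below] - half) / w_outer).astype(np.int64)
    return q
```

Because the entropy model gives the probability distribution of z (supplement 6), the peak and hence the zero point can be located without side information at the decoder.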
(132) 7. An image decoding apparatus, including:
(133) an entropy decoder configured to perform entropy decoding on a bit stream by using an entropy model to form a quantized latent variable;
(134) a de-quantizer configured to perform de-quantizing processing on the quantized latent variable according to a quantization step Q to generate a reconstructed latent variable; and
(135) an image decoder configured to perform decoding processing on the reconstructed latent variable to obtain restored image data x̂, the image decoder being obtained by training by the training device as described in supplement 1.
(136) 8. The image decoding apparatus according to supplement 7, wherein,
(137) the de-quantizer performs the de-quantizing processing according to the quantization step.
(138) 9. The image decoding apparatus according to supplement 7, wherein the image decoding apparatus further includes:
(139) a second quantization step adjuster configured to adjust the quantization step Q.
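In the uniform case, the de-quantizing processing of supplements 7 to 9 reduces to scaling the decoded symbol back by the quantization step Q; the step must match the one used at the encoding side, which is why both sides carry a quantization step adjuster. A minimal sketch:

```python
import numpy as np

def dequantize(q, Q):
    """De-quantizing processing: reconstruct the latent variable from the
    quantized symbol q by scaling with the quantization step Q. For a
    uniform quantizer the reconstruction error per element is at most Q/2."""
    return q.astype(np.float64) * Q
```

The reconstructed latent variable is then fed to the trained image decoder to obtain the restored image data.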
(140) 10. A training method for an image processing apparatus, in which an image encoder and an image decoder are trained by using a training image, the training method including:
(141) acquiring a latent variable z obtained by the image encoder by encoding input training image data;
(142) acquiring first restored image data obtained by the image decoder by decoding the latent variable z and second restored image data obtained by the image decoder by decoding a sum (z+ε) of the latent variable z and a noise ε; and
(143) training the image encoder and the image decoder according to a cost function L, the cost function L being related to a deviation between the input training image data x and the first restored image data and a deviation between the first restored image data and the second restored image data.
(144) 11. An image encoding method, including:
(145) encoding input image data x by an image encoder to obtain a latent variable z, the image encoder being obtained in the training method described in supplement 10;
(146) performing quantizing processing on the latent variable z by a quantizer according to a quantization step Q to generate a quantized latent variable; and
(147) performing entropy coding on the quantized latent variable by an entropy encoder by using an entropy model to form a bit stream.
(148) 12. The image encoding method according to supplement 11, wherein the image encoding method further includes:
(149) adjusting the quantization step Q by a first quantization step adjuster to adjust a bit rate of the bit stream.
(150) 13. The image encoding method according to supplement 11, wherein,
(151) the quantizing processing of the quantizer is non-uniform quantizing processing.
(152) 14. The image encoding method according to supplement 13, wherein,
(153) the non-uniform quantizing processing includes:
(154) taking a latent variable z corresponding to a probability distribution peak value of the latent variable z as a zero point, a latent variable z of a first range containing the zero point corresponding to a first quantized latent variable; and
(155) for quantized latent variables other than the first quantized latent variable, the other quantized latent variables corresponding to latent variables z of a second range, the second range being not greater than the first range.
(156) 15. The image encoding method according to supplement 14, wherein,
(157) the probability distribution peak value of the latent variable z is obtained based on the entropy model.
(158) 16. An image decoding method, including:
(159) performing entropy decoding on a bit stream by an entropy decoder by using an entropy model to form a quantized latent variable;
(160) performing de-quantizing processing on the quantized latent variable by a de-quantizer according to a quantization step Q to generate a reconstructed latent variable; and
(161) performing decoding processing on the reconstructed latent variable by an image decoder to obtain restored image data, the image decoder being obtained in the training method described in supplement 10.
(162) 17. The image decoding method according to supplement 16, wherein,
(163) the de-quantizer performs the de-quantizing processing according to the quantization step.
(164) 18. The image decoding method according to supplement 16, wherein the image decoding method further includes:
(165) adjusting the quantization step Q by a second quantization step adjuster.