Methods and apparatuses for encoding and decoding digital light field images
10887608 · 2021-01-05
Assignee
Inventors
Cpc classification
H04N19/88 (ELECTRICITY)
H04N13/161
H04N13/232
H04N13/229
H04N19/649
H04N19/46
International classification
H04N19/46 (ELECTRICITY)
H04N13/229
H04N13/232
H04N13/161
Abstract
A method for encoding a raw lenselet image includes a receiving phase, wherein at least a portion of a raw lenselet image is received, the image including a plurality of macro-pixels, each macro-pixel having pixels corresponding to a specific view angle for the same point of a scene, and an output phase, wherein a bitstream having at least a portion of an encoded lenselet image is outputted. The method has an image transform phase, wherein the pixels of said raw lenselet image are spatially displaced in a transformed multi-color image having a larger number of columns and rows with respect to the received raw lenselet image, wherein dummy pixels having undefined value are inserted into the raw lenselet image and wherein the displacement is performed so as to put the estimated center location of each macro-pixel onto integer pixel locations. Moreover, the method includes a sub-view generation phase, wherein a sequence of sub-views is generated, said sub-views having pixels of the same angular coordinates extracted from different macro-pixels of the transformed raw lenselet image. Finally, the method has a graph coding phase, wherein a bitstream is generated by encoding a graph representation of at least one of the sub-views of the sequence according to a predefined graph signal processing technique.
Claims
1. A method for encoding a raw lenselet image (f), comprising: a receiving phase, wherein at least a portion of a raw lenselet image (f) is received, said image (f) comprising a plurality of macro-pixels, each macro-pixel comprising pixels corresponding to a specific view angle for the same point of a scene; an output phase, wherein a bitstream (f.sub.d{circumflex over ()}) comprising at least a portion of an encoded lenselet image (f) is outputted; an image transform phase, wherein the pixels of said raw lenselet image (f) are spatially displaced in a transformed multi-color image that includes the colors red, blue and green and has a larger number of columns and rows with respect to the received raw lenselet image, wherein dummy pixels having undefined value are inserted into said raw lenselet image (f) and wherein said displacement is performed so as to put the estimated center location of each macro-pixel onto integer pixel locations; a sub-view generation phase, wherein a sequence of sub-views (f.sub.d) is generated, said sub-views comprising pixels of the same angular coordinates extracted from different macro-pixels of said transformed raw lenselet image (f); a graph coding phase, wherein a bitstream (f.sub.d{circumflex over ()}) is generated by encoding a graph representation of at least one of the sub-views of said sequence (f.sub.d) according to a predefined graph signal processing (GSP) technique, wherein said output phase comprises outputting said graph-coded bitstream (f.sub.d{circumflex over ()}) for its transmission or storage.
2. The encoding method according to claim 1, wherein said spatial displacement comprises at least a rotation, a translation or a scaling operation.
3. The encoding method according to claim 1, wherein said graph representation is generated based on said sequence (f.sub.d) of a plurality of said sub-views organized in group of pictures (GOP) structures.
4. The encoding method according to claim 1, wherein in said sub-views graph generating phase the sub-views sequence (f.sub.d) is divided into multiple GOPs, each consisting of a predefined number G of sub-views.
5. The encoding method according to claim 3, wherein said representation of the sub-views sequence (f.sub.d) is generated by considering intra-view or inter-view correlations among the sub-views within the group of pictures (GOP) structure, such that each node of said graph representation is connected to a predefined number of nearest nodes, in terms of Euclidean distance, within the same sub-view and the reference sub-view of the GOP structure.
6. The encoding method according to claim 1, wherein said sub-view generation phase comprises: a subaperture generating phase, wherein a subaperture image comprising a plurality of sub-views is generated, by composing each sub-view such that the pixel having the same relative position with respect to each macro-pixel center is used; a subaperture rearranging phase, wherein a sequence (f.sub.d) of sub-views is generated by rearranging the sub-views composing the generated subaperture image, on the basis of at least one predefined order.
7. The encoding method according to claim 6, wherein in said subaperture rearranging phase said predefined order is a raster scan order, or a helicoidal order, or a zig-zag order or a chess-like order.
8. The encoding method according to claim 1, wherein in said graph coding phase said bitstream (f.sub.d{circumflex over ()}) is generated by encoding said graph representation of the sub-views sequence (f.sub.d) according to the Graph Fourier transform (GFT) or the Graph based Lifting Transform (GLT).
9. A method for decoding a bitstream comprising at least one encoded raw lenselet image (f), comprising: a receiving phase, wherein a graph-coded bitstream (f.sub.d{circumflex over ()}) of a sub-views sequence (f.sub.d) is received, wherein dummy pixels having undefined value are inserted into said raw lenselet image (f) and wherein each sub-view comprises pixels of the same angular coordinates extracted from different macro-pixels of said raw lenselet image (f), each macro-pixel comprising pixels corresponding to a specific view angle for the same point of a scene; an output phase, wherein the reconstructed light field image (f.sub.d{tilde over ()}) is outputted or displayed; a graph decoding phase, wherein said graph-coded bitstream (f.sub.d{circumflex over ()}) is decoded according to a predefined graph signal processing (GSP) technique, outputting a reconstructed sub-views sequence (f.sub.d), wherein said sub-views comprise dummy pixels situated in pixel locations having undefined color value; a demosaicing filter phase, wherein a full-color demosaiced lenselet image that includes the colors red, blue and green is generated through a color interpolation by applying a demosaicing technique to said sub-views sequence (f.sub.d); a raw lenselet rearrangement phase, wherein a full-color subaperture image (f.sub.d) that includes the colors red, blue and green is obtained based on said demosaiced full-color lenselet image; wherein said output phase comprises outputting said generated full-color subaperture image (f.sub.d).
10. The decoding method according to claim 9, wherein said lenselet rearrangement phase comprises: an image reconstructing phase, wherein a reconstructed subaperture image comprising a plurality of sub-views is generated by rearranging said reconstructed sub-views in the sequence (f.sub.d) on the basis of at least one predefined order; a lenselet image reconstructing phase, wherein a reconstructed lenselet image is generated such that the pixels of each sub-view are located in the corresponding macro-pixels on the basis of the order used in the encoding of the received raw lenselet image (f).
11. The decoding method according to claim 9, wherein in said lenselet rearrangement phase said predefined order is a raster scan order, or a helicoidal order, or a zig-zag order or a chess-like order.
12. The decoding method according to claim 9, wherein in said graph decoding phase said reconstructed sub-views sequence (f.sub.d) is generated by decoding said graph representation of the sub-views sequence (f.sub.d) according to the Graph Fourier transform (GFT) or the Graph based Lifting Transform (GLT).
13. An apparatus for encoding a raw lenselet image (f), comprising: an input unit configured for acquiring at least a portion of a raw lenselet image (f) from a source, comprising a plurality of macro-pixels, each macro-pixel comprising pixels corresponding to a specific view angle for the same point of a scene; an output unit configured for outputting at least a portion of a resulting bitstream (f.sub.d{circumflex over ()}); at least one processing unit configured for executing a set of instructions for encoding said raw lenselet image (f); a memory unit containing image data relating to said raw lenselet image (f) and the result (f.sub.d, f.sub.d{circumflex over ()}) of the execution of said encoding instructions, wherein said at least one processing unit is configured for spatially displacing the pixels of said raw lenselet image (f) in a new multi-color image that includes the colors red, blue and green and has a larger number of columns and rows with respect to the received raw lenselet image, wherein dummy pixels are inserted in the pixel locations having undefined color channel value and wherein said displacement is performed so as to put the estimated center location of each macro-pixel onto integer pixel locations; wherein said at least one processing unit is further configured for generating a graph representation of a sequence of sub-views (f.sub.d) starting from said raw lenselet image (f), each sub-view comprising pixels of the same angular coordinates extracted from different macro-pixels of said raw lenselet image (f); wherein said at least one processing unit is further configured for fetching the graph representation of the sequence of sub-views (f.sub.d) from said memory unit, for executing a graph signal processing (GSP) technique for coding said sequence of sub-views (f.sub.d), and for storing the resulting bitstream (f.sub.d{circumflex over ()}) into said memory unit.
14. The encoding apparatus according to claim 13, wherein said at least one processing unit is further configured for: generating said sequence of sub-views (f.sub.d) by forming a subaperture image comprising a plurality of sub-views by composing each sub-view such that the pixel having the same relative position with respect to each macro-pixel center is used, and rearranging said sequence (f.sub.d) of sub-views composing the generated subaperture image on the basis of at least one predefined order.
15. The encoding apparatus according to claim 14, wherein said predefined order is a raster scan order, or a helicoidal order, or a zig-zag order, or a chess-like order.
16. The encoding apparatus according to claim 13, wherein said graph signal processing (GSP) technique is the Graph Fourier transform (GFT) or the Graph based Lifting Transform (GLT).
17. An apparatus for decoding an encoded raw lenselet image comprising: an input unit configured to read a graph-coded bitstream (f.sub.d{circumflex over ()}) of a sub-views sequence (f.sub.d) from a communication channel or storage media, wherein dummy pixels having undefined value are inserted into said raw lenselet image (f) and wherein each sub-view comprises pixels of the same angular coordinates extracted from different macro-pixels of said raw lenselet image (f), each macro-pixel comprising pixels corresponding to a specific view angle for the same point of a scene; an output unit which reproduces or outputs the processed light field images or video streams (f.sub.d{tilde over ()}); at least one processing unit configured for executing a set of instructions for decoding said encoded images or video streams (f.sub.d{circumflex over ()}); a memory unit containing image data relating to said encoded images or video streams (f.sub.d{circumflex over ()}) and the result of the execution of said instructions for decoding; said at least one processing unit being configured for receiving and decoding the bitstream of the sub-views sequence (f.sub.d) according to a predefined graph signal processing (GSP) technique, such that a reconstructed sub-views sequence (f.sub.d) is recovered, wherein said sub-views comprise dummy pixels situated in pixel locations having undefined color value; said at least one processing unit being configured for receiving said reconstructed sub-views sequence (f.sub.d) and for generating a full-color demosaiced lenselet image that includes the colors red, blue and green through a color interpolation by applying a demosaicing technique to said sub-views.
18. The decoding apparatus according to claim 17, wherein said at least one processing unit is configured for: rearranging said sub-views on the basis of at least one predefined order for obtaining a reconstructed subaperture image (f.sub.d) comprising a plurality of sub-views, and; reconstructing a lenselet image such that the pixels of each sub-view are located in the corresponding macro-pixels on the basis of the order used in the encoding of the received raw lenselet image (f).
19. A non-transitory computer readable medium operable with a digital processing device, and which comprises portions of software code for executing the method according to claim 1.
Description
BRIEF DESCRIPTION OF DRAWING
(1) The characteristics and other advantages of the present invention will become apparent from the description of an embodiment illustrated in the appended drawings, provided purely by way of non-limiting example, in which:
DETAILED DESCRIPTION OF THE INVENTION
(20) In this description, any reference to an embodiment will indicate that a particular configuration, structure or feature described in regard to the implementation of the invention is comprised in at least one embodiment. Therefore, the phrase "in an embodiment" and other similar phrases, which may be present in different parts of this description, will not necessarily all refer to the same embodiment. Furthermore, any particular configuration, structure or feature may be combined in one or more embodiments in any way deemed appropriate.
(21) The references below are therefore used only for sake of simplicity, and do not limit the protection scope or extension of the various embodiments.
(22) With reference to
(23) The video source 1000 can be either a provider of live images, such as a light field camera, or a provider of stored contents such as a disk or other storage and memorization devices. The Central Processing Unit (CPU) 1010 takes care of activating the proper sequence of operations performed by the units 1020, 1040, in the encoding process performed by the apparatus 1005.
(24) These units can be implemented by means of dedicated hardware components (e.g. CPLD, FPGA, or the like) or can be implemented through one or more sets of instructions which are executed by the CPU 1010; in the latter case, the units 1020, 1040 are just logical (virtual) units.
(25) When the apparatus 1005 is in an operating condition, the CPU 1010 first fetches the light field image f from the video source 1000 and loads it into the memory unit 1030.
(26) Next, the CPU 1010 activates the pre-processing unit 1020, which fetches the raw lenselet image f from the memory 1030, executes the phases of the method for pre-processing the raw lenselet image f according to an embodiment of the invention (see
(27) Successively, the CPU 1010 activates the graph coding unit 1040, which fetches from the memory 1030 the graph representation of the sequence of sub-views f.sub.d, executes the phases of the method for encoding the sequence of sub-views f.sub.d according to a graph signal processing (GSP) technique, such as the Graph Fourier transform (GFT) or the Graph based Lifting Transform (GLT), and stores the resulting bitstream f.sub.d{circumflex over ()} back into the memory unit 1030.
(28) At this point, the CPU 1010 may dispose of the data from the memory unit 1030 which are not required anymore at the encoder 1005.
(29) Finally, the CPU 1010 fetches the bitstream f.sub.d{circumflex over ()} from memory 1030 and puts it into the channel or saves it into the storage media 1195.
(30) With reference also to
(31) As for the previously described encoding apparatus 1005, also the CPU 1110 of the decoding apparatus 1100 takes care of activating the proper sequence of operations performed by the units 1120, 1130 and 1150.
(32) These units can be implemented by means of dedicated hardware components (e.g. CPLD, FPGA, or the like) or can be implemented through one or more sets of instructions stored in a memory unit which are executed by the CPU 1110; in the latter case, the units 1120, 1130 and 1150 are just logical (virtual) units.
(33) When the apparatus 1100 is in an operating condition, the CPU 1110 first fetches the bitstream f.sub.d{circumflex over ()} from the channel or storage media 1095 via any possible input unit and loads it into the memory unit 1140.
(34) Then, the CPU 1110 activates the graph decoding unit 1120, which fetches from the memory 1140 the bitstream f.sub.d{circumflex over ()}, executes phases of the method for decoding the bitstream f.sub.d{circumflex over ()} of the sub-views sequence according to a predefined graph signal processing (GSP) technique, such as the Graph Fourier transform (GFT) or the Graph based Lifting Transform (GLT), outputs the reconstructed sub-views sequence f.sub.d, and loads it into the memory unit 1140.
(35) Any GSP technique can be used according to the invention; what is important is that the same technique is used by both the encoding apparatus 1005 and the decoding apparatus 1100, so as to assure a correct reconstruction of the original light field image.
(36) Successively, the CPU 1110 activates the demosaicing unit 1150, which fetches from the memory 1140 the reconstructed sub-views sequence f.sub.d, and executes phases of the method for generating a full-color subaperture image f.sub.d according to the invention, and loads it into the memory unit 1140.
(37) Then, the CPU 1110 activates the post-processing unit 1130, which fetches from the memory 1140 the full-color subaperture image f.sub.d and generates a reconstructed light field image f.sub.d{tilde over ()}, storing it into the memory unit 1140.
(38) At this point, the CPU 1110 may dispose of the data from the memory which are not required anymore at the decoder side.
(39) Finally, the CPU 1110 fetches from memory 1140 the recovered light field image f.sub.d{tilde over ()} and sends it, by means of the video adapter 1170, to the display unit 1195.
(40) It should be noted that the encoding and decoding apparatuses described in the figures may be controlled by the respective CPU to internally operate in a pipelined fashion, making it possible to reduce the overall time required to process each image, i.e. by performing several instructions at the same time (e.g. using more than one CPU and/or CPU core).
(41) It should also be noted that many other operations may be performed on the output data of the coding device 1005 before sending them on the channel or storing them on a storage unit, such as modulation and channel coding (i.e. error protection).
(42) Conversely, the inverse operations may be performed on the input data of the decoding device 1100 before effectively processing them, e.g. demodulation and error correction. Those operations are irrelevant for embodying the present invention and will therefore be omitted.
(43) Besides, the block diagrams shown in
(44) The skilled person understands that these charts have no limitative meaning, in the sense that the functions, interrelations and signals shown therein can be arranged in many equivalent ways; for example, operations appearing to be performed by different logical blocks can be performed by any combination of hardware and software resources, with the same resources possibly being used for realizing different blocks or all of them.
(45) The encoding process and the decoding process will now be described in detail.
(46) Encoding
(47) In order to show how the encoding process occurs, it is assumed that the image f (or a block thereof) to be processed is preferably a color patterned raw lenselet image, where each pixel is encoded over 8 bits, so that the value of said pixel can be represented by an integer value ranging between 0 and 255. Of course, this is only an example; images of higher color depth (e.g. 16, 24, 30, 36 or 48 bit) can be processed by the invention without any loss of generality.
(48) The image f can be obtained by applying a color filter array on a square grid of photosensors (e.g. CCD sensors); a well-known color filter array is for example the Bayer filter, which is used in most single-chip digital image sensors.
(50) With also reference to
(51) With also reference to
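As a rough illustration of the sub-view generation phase described above (pixels with the same angular coordinates gathered from different macro-pixels of the transformed lenselet image), the following sketch assumes square M×M macro-pixels already aligned to the pixel grid, as after the image transform phase; the function name and toy image are illustrative, not the patent's exact procedure.

```python
import numpy as np

def extract_subviews(lenselet, M):
    """Split a raw lenselet image into M*M sub-views.

    Each macro-pixel is assumed to be an M x M block aligned to the
    pixel grid (as after the image transform phase, which puts the
    estimated macro-pixel centers onto integer pixel locations).
    lenselet: 2-D array whose height and width are multiples of M.
    Returns an array of shape (M, M, H//M, W//M): sub-view (u, v)
    collects the pixel at offset (u, v) inside every macro-pixel.
    """
    H, W = lenselet.shape
    assert H % M == 0 and W % M == 0
    # Reshape so that axes 1 and 3 index the position inside a macro-pixel.
    blocks = lenselet.reshape(H // M, M, W // M, M)
    return blocks.transpose(1, 3, 0, 2)

# Toy example: four macro-pixels of side 2 in a 4x4 image.
img = np.arange(16).reshape(4, 4)
views = extract_subviews(img, 2)
# Sub-view (0, 0) gathers the top-left pixel of each macro-pixel.
```

Scanning the resulting (u, v) sub-views in a predefined order (raster, helicoidal, zig-zag or chess-like, as in the claims) then yields the sequence f.sub.d to be graph coded.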
(52) Two distinctive schemes for graph connection can be considered.
(53) The first scheme takes into account only intra-view connections when constructing a graph, where each node is connected to a predefined number K of nearest nodes in terms of Euclidean distance, i.e. the distance between available irregularly spaced pixels (e.g. 630, 640) within the same sub-view of the sequence.
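A minimal sketch of this first, intra-view connection scheme: each available pixel is linked to its K nearest neighbours in terms of Euclidean distance between pixel locations. The sample coordinates and the value of K below are illustrative.

```python
import numpy as np

def knn_intra_view_edges(coords, K):
    """Connect each available pixel to its K nearest neighbours.

    coords: (N, 2) sequence of (row, col) positions of the available,
    possibly irregularly spaced, pixels of one sub-view.
    Returns a set of undirected edges (i, j) with i < j.
    """
    coords = np.asarray(coords, dtype=float)
    # Pairwise Euclidean distances between pixel locations.
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)  # a node is not its own neighbour
    edges = set()
    for i in range(len(coords)):
        for j in np.argsort(dist[i])[:K]:
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges

pts = [(0, 0), (0, 2), (2, 0), (5, 5)]
print(knn_intra_view_edges(pts, 2))
```

Because the K-nearest relation is not symmetric, a node may end up with more than K incident edges once the graph is made undirected.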
(54) The second scheme takes into account both intra and inter-view correlations among the sub-views of the sequence.
(55) In order to reduce graph complexity, the sub-views sequence is divided into multiple GOPs, each consisting of a predefined number G of sub-views.
(57) Successively, a sub-view matching for motion estimation between each sub-view and the previous reference sub-view is performed in the sequence.
(58) The optimal global motion vector can be determined for each sub-view in terms of sum of squared error (SSE), which can be evaluated considering the pixel samples of each sub-view and the previous reference sub-view.
(59) The matching is considered for the whole sub-view, instead of applying the block-based matching employed for example for the motion estimation in HEVC.
(60) Specifically, each m×n sub-view is first extrapolated to the size (m+2r)×(n+2r) before the motion search, where r is the motion search width.
(61) This reduces the overhead of encoding the motion vectors. The sub-view extrapolation can be performed by employing several techniques, for example by copying the border pixel samples of each sub-view.
(62) After motion estimation, each pixel is connected to a predefined number P of nearest neighbours in terms of Euclidean distance within the same sub-view and the reference view shifted by the optimal motion vector.
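The whole-sub-view motion search described above can be sketched as follows, assuming border-replication extrapolation and an exhaustive search over the window [-r, r]×[-r, r]; the function name and test data are illustrative.

```python
import numpy as np

def global_motion_vector(cur, ref, r):
    """Exhaustive whole-sub-view motion search.

    The reference sub-view is first extrapolated from (m, n) to
    (m + 2r, n + 2r) by replicating its border samples; the single
    global motion vector minimizing the sum of squared errors (SSE)
    against the current sub-view is returned as (dy, dx).
    """
    m, n = cur.shape
    padded = np.pad(ref, r, mode="edge")  # border-pixel extrapolation
    best, best_mv = np.inf, (0, 0)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            cand = padded[r + dy : r + dy + m, r + dx : r + dx + n]
            sse = float(((cur.astype(float) - cand) ** 2).sum())
            if sse < best:
                best, best_mv = sse, (dy, dx)
    return best_mv

ref = np.arange(25.0).reshape(5, 5)
cur = np.roll(ref, 1, axis=1)  # content shifted right by one column
print(global_motion_vector(cur, ref, 2))
```

Searching one vector per sub-view, rather than per block, is what keeps the motion-vector overhead small here.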
(63) With also reference to
(64) A graph G=(V,E) is composed of a set of nodes v∈V, connected with links. For each link e.sub.i,j∈E, connecting nodes v.sub.i and v.sub.j, there is an associated weight of non-negative value w.sub.ij∈[0,1], which captures the similarity between the connected nodes.
(65) An image f can be represented as a graph where the pixels of the image correspond to the graph nodes, while the weights of the links describe the pixels' similarity, which can be evaluated using a predetermined non-linear function (e.g. a Gaussian or Cauchy function) depending on the grayscale space distance d.sub.i,j=|f.sub.i−f.sub.j| between the i-th pixel f.sub.i and the j-th pixel f.sub.j of the image.
(66) In the Graph Fourier transform (GFT) technique, the graph information can be represented with a weights matrix W whose elements are the weights w.sub.ij of the graph; the corresponding Laplacian matrix can then be obtained as L=D−W, where D is a diagonal matrix with elements d.sub.i=Σ.sub.kw.sub.ik. The GFT is performed by the mathematical expression {circumflex over (f)}=U.sup.Tf, where U is the matrix whose columns are the eigenvectors of the matrix L, and f is the raster-scan vector representation of the image f.
(67) The coefficients {circumflex over (f)} and the weights w.sub.ij are then quantized and entropy coded. Further work known in the art describes approaches that improve GFT-based coding, as shown for example by W. Hu, G. Cheung, A. Ortega, and O. C. Au in Multiresolution graph Fourier transform for compression of piecewise smooth images, published in IEEE Transactions on Image Processing.
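The GFT described above can be sketched numerically on a tiny hand-made graph; the path-graph weights below are illustrative.

```python
import numpy as np

def gft(f, W):
    """Graph Fourier transform of signal f on a graph with weight matrix W.

    L = D - W, with D diagonal and d_i = sum_k w_ik; the transform is
    f_hat = U^T f, where the columns of U are the eigenvectors of L.
    Returns (f_hat, U); the inverse transform is simply f = U @ f_hat.
    """
    D = np.diag(W.sum(axis=1))
    L = D - W
    # L is symmetric, so eigh yields an orthonormal eigenbasis U.
    _, U = np.linalg.eigh(L)
    return U.T @ f, U

# Three-node path graph with unit weights on its two links.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
f = np.array([1.0, 2.0, 3.0])
f_hat, U = gft(f, W)
# Perfect reconstruction: U @ f_hat recovers f.
```

Since U is orthonormal, the transform is lossless before quantization; compression comes from quantizing and entropy coding f_hat.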
(68) The Graph based Lifting Transform (GLT) technique is a multi-level filterbank that guarantees invertibility. At each level m, the graph nodes are first divided into two disjoint sets, a prediction set SP.sup.m and an update set SU.sup.m.
(69) The values in SU.sup.m are used to predict the values in SP.sup.m, the resulting prediction errors are stored in SP.sup.m, and are then used to update the values in SU.sup.m.
(70) The smoothed signal in SU.sup.m will serve as the input signal to level m+1, while the computation for coefficients in SP.sup.m uses only the information in SU.sup.m, and vice versa.
(71) Carrying on the process iteratively produces a multi-resolution decomposition. For video/image compression applications, the coefficients in the update set SU.sup.M of the highest level M will be quantized and entropy coded. Further work known in the art describes approaches that improve GLT-based coding, as shown for example by Y.-H. Chao, A. Ortega, and S. Yea, Graph-based lifting transform for intra-predicted video coding, published in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016).
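A one-level sketch of the lifting idea described above, with a simple neighbour-mean predictor and a halved update step; the split rule and weights are illustrative choices, not the cited construction, but the predict/update pair is invertible as required.

```python
import numpy as np

def lifting_level(signal, pred_idx, upd_idx, neigh):
    """One level of a lifting transform on a graph signal.

    Nodes are split into a prediction set and an update set; here each
    node's neighbours are assumed to lie in the other set (true for a
    bipartite split such as the path graph below).  Each prediction
    node is predicted by the mean of its neighbours and replaced by
    its residual; each update node is then smoothed by half the mean
    residual of its neighbours.
    """
    s = signal.astype(float).copy()
    pred = {i: np.mean([s[j] for j in neigh[i]]) for i in pred_idx}
    for i in pred_idx:                      # predict: keep residuals
        s[i] -= pred[i]
    for j in upd_idx:                       # update: smooth the low band
        s[j] += 0.5 * np.mean([s[i] for i in neigh[j]])
    return s

def lifting_level_inverse(s, pred_idx, upd_idx, neigh):
    x = s.astype(float).copy()
    for j in upd_idx:                       # undo the update step
        x[j] -= 0.5 * np.mean([x[i] for i in neigh[j]])
    for i in pred_idx:                      # undo the prediction step
        x[i] += np.mean([x[j] for j in neigh[i]])
    return x

# Four-node path graph 0-1-2-3: even nodes update set, odd nodes predict.
neigh = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = np.array([1.0, 4.0, 2.0, 8.0])
y = lifting_level(x, pred_idx=[1, 3], upd_idx=[0, 2], neigh=neigh)
assert np.allclose(lifting_level_inverse(y, [1, 3], [0, 2], neigh), x)
```

Iterating the step on the update set of each level produces the multi-resolution decomposition mentioned in the text.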
(72) Summarizing, with also reference to
(73) Finally, the graph-coded bitstream f.sub.d{circumflex over ()} of the sub-views sequence can be transmitted and/or stored by means of the output unit 1080.
(74) Decoding
(75) With reference to
(76) The graph decoding unit 970 is configured to receive and decode the bitstream f.sub.d{circumflex over ()} of the sub-views sequence according to a predefined graph signal processing (GSP) technique, outputting the reconstructed sub-views sequence f.sub.d (step 805).
(77) The demosaicing unit 975 preferably performs the following steps: a subaperture image reconstructing step 810 for receiving the reconstructed sub-views sequence f.sub.d and for generating the reconstructed subaperture image f.sub.d, by rearranging each reconstructed sub-view in the sequence on the basis of at least one predefined order (see
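The color interpolation performed by the demosaicing unit can be sketched, for an RGGB Bayer pattern, with a rough bilinear filler; the pattern layout, averaging rule and names below are illustrative assumptions, not the cited demosaicing technique.

```python
import numpy as np

def bilinear_demosaic(raw):
    """Rough bilinear demosaicing of an RGGB Bayer mosaic.

    Sampled pixels are kept as-is; every missing colour value is
    filled with the average of that channel's samples found in the
    3x3 neighbourhood (borders handled by edge replication).
    raw: (H, W) array with H and W even.  Returns (H, W, 3).
    """
    H, W = raw.shape
    masks = np.zeros((H, W, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R samples
    masks[0::2, 1::2, 1] = True   # G samples on R rows
    masks[1::2, 0::2, 1] = True   # G samples on B rows
    masks[1::2, 1::2, 2] = True   # B samples
    pad = np.pad(raw.astype(float), 1, mode="edge")
    padm = np.pad(masks, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros((H, W, 3))
    for c in range(3):
        num = np.zeros((H, W))
        den = np.zeros((H, W))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                m = padm[1 + dy:1 + dy + H, 1 + dx:1 + dx + W, c]
                num += np.where(m, pad[1 + dy:1 + dy + H, 1 + dx:1 + dx + W], 0.0)
                den += m
        out[..., c] = np.where(masks[..., c], raw, num / den)
    return out

# Constant-colour sanity check: a mosaic sampled from a uniform
# (100, 150, 200) image must demosaic back to the uniform image.
raw = np.zeros((4, 4))
raw[0::2, 0::2] = 100.0
raw[0::2, 1::2] = 150.0
raw[1::2, 0::2] = 150.0
raw[1::2, 1::2] = 200.0
rgb = bilinear_demosaic(raw)
```

A production decoder would use an edge-aware interpolator, but the structure (per-channel sample masks plus neighbourhood interpolation) is the same.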
(78) The optional post-processing unit 980 is configured to receive the full-color subaperture image f.sub.d and to generate a reconstructed light field image f.sub.d{tilde over ()}, using operations permitted on light field images, such as re-focusing, noise reduction, 3D view construction and modification of the depth of field.
(79) Summarizing, with also reference to
(80) Finally, the reconstructed light field image f.sub.d{tilde over ()} can be outputted by means of output video unit 1170 and displayed on the display unit 1195.
(81) With reference to
(82) In order to perform the encoding-decoding test, the EPFL database (M. Rerabek and T. Ebrahimi, New Light Field Image Dataset, in 8th International Conference on Quality of Multimedia Experience (QoMEX), no. EPFL-CONF-218363, 2016) was used.
(83) The subaperture image consists of 193 sub-views of size 432×624.
(84) The ordinate axis denotes the average PSNR over the R, G, and B color components. Compared to state-of-the-art schemes, a coding gain is achieved in the high-bitrate region.
(85) For the test, both the All-intra and the Low delay P configurations were used for the baseline HEVC-based scheme.
(86) In the Low delay P configuration of HEVC, the sub-views are arranged into a pseudo-sequence in the same way as pictured in
(87) The first view in each GOP is compressed as an I-frame, and the remaining frames are coded as P-frames. For the proposed graph based approach, each node is connected to 6 nearest neighbours, and the search width r=2 for sub-view matching.
(88) The transformed coefficients are uniformly quantized and entropy coded using the Alphabet and Group Partitioning (AGP) proposed by Said and Pearlman in A new, fast, and efficient image codec based on set partitioning in hierarchical trees, published in IEEE Transactions on circuits and systems for video technology, vol. 6, no. 3, pp. 243-250, 1996. In order to evaluate the reconstructed lenselet image obtained using graph based coding, the reconstructed lenselet image is demosaiced and converted to the colored subaperture image in the same way as proposed by D. G. Dansereau, O. Pizarro, and S. B. Williams, in Decoding, calibration and rectification for lenselet-based plenoptic cameras, published in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013.
(89) In the baseline method, the reconstructed YUV 4:2:0 sequences are converted to RGB 4:4:4, where the upsampling for the U and V components is based on the nearest neighbour. In conclusion, the obtained results show that the method described in the present invention can outperform state-of-the-art schemes such as an HEVC-based approach.
(90) In an alternative embodiment of the invention, the patterned raw lenselet image f can be generated by employing other color filter arrays placed on a square grid of photosensors, besides the well-known Bayer filter.
(91) In another embodiment of the invention, the patterned raw lenselet image f can be generated by capturing other combinations of color components, for example RGBY (red, green, blue, yellow) instead of RGB.
(92) In other embodiments, the invention is integrated in a video coding technique wherein also the temporal correlation between different light field images is taken into account. To that end, a prediction mechanism similar to those used in the conventional video compression standards can be used in combination with the invention for effectively compressing and decompressing a video signal.
(93) In other embodiments, the encoding and decoding stages described in the present invention can be performed employing other graph signal processing (GSP) techniques instead of the Graph Fourier transform (GFT), or the Graph based Lifting Transform (GLT).
(94) In other embodiments, the graph signal processing (GSP) technique employed at the encoding and decoding stages can be signalled from the encoder apparatus to the decoder apparatus. Alternatively, the GSP technique employed by both the encoder and decoder is defined in a technical standard.
(95) The present description has tackled some of the possible variants, but it will be apparent to the person skilled in the art that other embodiments may also be implemented, wherein some elements may be replaced with other technically equivalent elements. The present invention is not therefore limited to the explanatory examples described herein, but may be subject to many modifications, improvements or replacements of equivalent parts and elements without departing from the basic inventive idea, as set out in the following claims.