Iterative synthesis of views from data of a multi-view video
20230053005 · 2023-02-16
Inventors
CPC classification
H04N13/111
ELECTRICITY
H04N13/282
ELECTRICITY
H04N13/349
ELECTRICITY
International classification
H04N13/111
ELECTRICITY
H04N13/349
ELECTRICITY
Abstract
Synthesis of an image of a view from data of a multi-view video. The synthesis includes an image processing phase as follows: generating image synthesis data from texture data of at least one image of a view of the multi-view video; calculating an image of a synthesized view from the generated synthesis data and at least one image of a view of the multi-view video; analyzing the image of the synthesized view relative to a synthesis performance criterion; if the criterion is met, delivering the image of the synthesized view; and if not, iterating the processing phase. The calculation of an image of a synthesized view at a current iteration includes modifying, based on synthesis data generated in the current iteration, an image of the synthesized view calculated during a processing phase preceding the current iteration.
Claims
1. A method for synthesizing an image of a view from data of a multi-view video, performed in an image synthesis device, the method comprising an image processing phase as follows: generating image synthesis data from texture data of at least one image of a view of the multi-view video, computing an image of a synthesized view from said generated synthesis data and from at least one image of a view of the multi-view video, analyzing the image of said synthesized view with respect to a synthesis performance criterion, if said criterion is met, releasing the image of said synthesized view, otherwise, iterating said processing phase, the computing of an image of a synthesized view in a current iteration comprising modifying, on the basis of the synthesis data generated in the current iteration, an image of the synthesized view computed during a processing phase preceding the current iteration.
2. The method as claimed in claim 1, wherein, in the current iteration, the image of the view that is used to generate the synthesis data is selected according to the result of the analysis of the image of the modified synthesized view during a processing phase preceding the current iteration.
3. The method as claimed in claim 1, wherein, in the current iteration, the generation of the synthesis data uses the synthesis data generated during a processing phase preceding the current iteration.
4. The method as claimed in claim 3, wherein, in the current iteration, the generation of synthesis data comprises modification of the synthesis data generated during a processing phase preceding the current iteration on the basis of texture data used in the current iteration.
5. The method as claimed in claim 3, wherein the synthesis data generated in the current iteration are combined with synthesis data generated during a processing phase preceding the current iteration.
6. The method as claimed in claim 1, wherein the synthesis data generated during the processing phase belong to the group consisting of: depth data associated with the texture data of the image of the view of the multi-view video that are used in said processing phase, contour data associated with the texture data of the image of the view of the multi-view video that are used in said processing phase, a visual element of the image of the view of the multi-view video that is used in said processing phase, said visual element not varying from one image of a view of the multi-view video to another, statistical data computed for texture data of the image of the view of the multi-view video that are used in said processing phase.
7. The method as claimed in claim 1, wherein the synthesis data generated during the processing phase are associated with an uncertainty value for the correlation of said synthesis data with the corresponding texture data of said at least one image of said view that were used to generate said synthesis data.
8. The method as claimed in claim 1, wherein, during the processing phase, the generation of synthesis data uses at least one depth image of a view of the multi-view video and/or at least one piece of information associated with the multi-view video.
9. The method as claimed in claim 1, wherein, in the current iteration, the modification of the image of the synthesized view computed in the previous iteration uses synthesis data generated previously in the current iteration.
10. The method as claimed in claim 1, wherein, in the current iteration, the modification of the image of the synthesized view computed in the previous iteration comprises computing an image of a synthesized view from said synthesis data generated in the current iteration and from the image of said synthesized view computed in said previous iteration.
11. The method as claimed in claim 1, wherein, in the current iteration, the modification of the image of the synthesized view computed in the previous iteration comprises the following: computing an image of a synthesized view from said synthesis data generated in the current iteration and from the image of a view of the multi-view video, combining the image of said computed synthesized view with the image of the synthesized view computed in said previous iteration.
12. The method as claimed in claim 1, wherein the current iteration is performed in response to a message from the synthesis device, said message containing: either information about the location of one or more areas/one or more pixels of the image of the synthesized view computed previously in the current iteration that do not meet the synthesis performance criterion, or a percentage of non-synthesized pixels of the image of the view synthesized previously in the current iteration.
13. The method as claimed in claim 1, wherein the current iteration is furthermore performed in response to reception of uncertainty values associated with the synthesis data generated in the previous iteration.
14. A device for synthesizing an image of a view from data of a multi-view video, said device comprising: a processor configured to perform an image processing phase as follows: generating image synthesis data from texture data of at least one image of a view of the multi-view video, computing an image of a synthesized view from said generated synthesis data and from at least one image of a view of the multi-view video, analyzing the image of said synthesized view with respect to a synthesis performance criterion, if said criterion is met, releasing the image of said synthesized view, otherwise, iterating the processing phase, the computing of the image of the synthesized view in a current iteration comprising modifying, on the basis of synthesis data generated in the current iteration, an image of the synthesized view computed during a processing phase preceding the current iteration.
15. The method according to claim 1, further comprising: decoding an encoded data signal of the multi-view video, said decoding comprising decoding images of multiple encoded views and producing a set of images of multiple decoded views, and performing at least one iteration of the image processing phase on the image of a view from said set of images.
16. The method as claimed in claim 15, wherein the synthesis data are generated from texture data of at least one of the decoded images of said set.
17. The method as claimed in claim 15, wherein the synthesis data, being previously encoded, are generated as follows: reading the encoded synthesis data in said data signal or in another data signal, decoding the read synthesis data.
18. The device according to claim 14, wherein the processor is further configured to perform: decoding an encoded data signal of the multi-view video, said decoding comprising: decoding images of multiple encoded views, producing a set of images of multiple decoded views, synthesizing the image of the view from data of the multi-view video by performing at least one iteration of the image processing phase.
19. (canceled)
20. A non-transitory computer-readable information medium containing instructions of a computer program stored thereon, comprising program code instructions which when executed by a processor of a synthesis device, configure the synthesis device to implement a method of synthesizing an image of a view from data of a multi-view video, performed in an image synthesis device, the method comprising an image processing phase as follows: generating image synthesis data from texture data of at least one image of a view of the multi-view video, computing an image of a synthesized view from said generated synthesis data and from at least one image of a view of the multi-view video, analyzing the image of said synthesized view with respect to a synthesis performance criterion, if said criterion is met, releasing the image of said synthesized view, otherwise, iterating said processing phase, the computing of an image of a synthesized view in a current iteration comprising modifying, on the basis of the synthesis data generated in the current iteration, an image of the synthesized view computed during a processing phase preceding the current iteration.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0085] Other features and advantages will become apparent from reading particular embodiments of the invention, which are given by way of illustrative and non-limiting examples, and from the appended drawings.
DETAILED DESCRIPTION OF ONE EMBODIMENT OF THE INVENTION
[0107] Such images are images which have been reconstructed by a decoder, prior to the performance of the iterative synthesis method.
[0108] This method is carried out by an image synthesis device SYNT, a simplified representation of which is shown in the appended drawings.
[0109] According to one particular embodiment of the invention, the actions accomplished by the synthesis method are performed by computer program instructions. To this end, the synthesis device SYNT has the conventional architecture of a computer and comprises in particular a memory MEM_S, a processing unit UT_S, equipped for example with a processor PROC_S, and controlled by a computer program PG_S stored in memory MEM_S. The computer program PG_S comprises instructions for performing the actions of the synthesis method as described above when the program is executed by the processor PROC_S.
[0110] On initialization, the code instructions of the computer program PG_S are for example loaded into a RAM memory (not shown) before being executed by the processor PROC_S. The processor PROC_S of the processing unit UT_S performs in particular the actions of the synthesis method described below, according to the instructions of the computer program PG_S.
[0111] Such a synthesis device SYNT may be arranged, as shown in the appended drawings, downstream of a decoder DEC.
[0112] According to a particular embodiment of the invention, the actions accomplished by the decoder DEC are performed by computer program instructions. To this end, the decoder DEC has the conventional architecture of a computer and comprises in particular a memory MEM_D, a processing unit UT_D, equipped for example with a processor PROC_D, and controlled by a computer program PG_D stored in memory MEM_D. The computer program PG_D comprises instructions for performing the actions of the decoding method as described below when the program is executed by the processor PROC_D.
[0113] On initialization, the code instructions of the computer program PG_D are for example loaded into a RAM memory (not shown) before being executed by the processor PROC_D. The processor PROC_D of the processing unit UT_D performs in particular the actions of the decoding method described below, according to the instructions of the computer program PG_D.
[0118] The synthesis device SYNT according to the invention is thus advantageously configured to carry out image synthesis, in which synthesis data are estimated jointly and interactively with the computing of an image of a synthesized view, the synthesis being iterated as needed in order to modify/complete/refine the synthesized image obtained, owing to the communication existing between the module M3 and the module M1.
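By way of purely illustrative example, the interaction between the modules M1, M2 and M3 can be sketched as follows in Python; the function names, the feedback variable and the iteration cap are assumptions introduced here for illustration, not elements of the description:

    # Illustrative sketch of the loop formed by the modules M1, M2 and M3.
    # The function names, the 'feedback' variable and the iteration cap are
    # assumptions introduced for illustration only.
    def iterative_view_synthesis(views, generate_synthesis_data, compute_view,
                                 analyze, max_iterations=10):
        synthesized = None   # image of the synthesized view, refined per phase
        feedback = None      # information INF returned from M3 to M1
        for _ in range(max_iterations):
            # M1: generate synthesis data from texture data, guided by feedback
            synthesis_data = generate_synthesis_data(views, feedback)
            # M2: compute or modify the synthesized view with the new data
            synthesized = compute_view(synthesis_data, views, previous=synthesized)
            # M3: analyze against the synthesis performance criterion
            criterion_met, feedback = analyze(synthesized)
            if criterion_met:
                break        # criterion met: the image is released
        return synthesized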
[0127] Such images are images which have been reconstructed by the decoder DEC described above.
[0128] The image processing phase P0 comprises the following.
[0129] In S1.sub.0, image synthesis data DS.sub.0 are generated by the module M1 of the synthesis device SYNT from texture data T.sub.i of at least one previously reconstructed image IV.sub.i.
[0134] According to the invention, the synthesis data DS.sub.0 are generated from texture data T.sub.i which correspond to only certain pixels of the texture component CT.sub.i of the previously reconstructed image IV.sub.i. The computational cost of the synthesis data generation step is therefore much lower than that of the depth estimates performed in the prior art, which consider all the pixels of an image. Preferably, the synthesis data DS.sub.0 generated are depth data associated with the texture data T.sub.i. However, other types of synthesis data DS.sub.0 can of course be envisaged.
[0135] Examples of synthesis data DS.sub.0 may be, non-exhaustively:
[0136] partial depth maps, i.e. images for which some pixels have an associated depth value and others have a reserved value indicating that no depth value could be associated with these pixels;
[0137] maps of contours or corners present in an image: the structural information contained in the contour maps, the corners and the active contours can be used by the iterative image synthesis method according to the invention, for example in order to avoid ghosting artifacts. This can be achieved by improving the synthesis algorithm or by improving the depth maps. Edge detection approaches may include the use of Sobel-, Canny-, Prewitt- or Roberts-type operators. Corners can be estimated using a Harris-type corner detection operator;
[0138] features extracted by SIFT- (Scale-Invariant Feature Transform) or SURF- (Speeded-Up Robust Features) type algorithms. Such algorithms are used for the estimation of homographies, fundamental matrices and image matching. The features extracted by these methods therefore share similar characteristics with depth maps, implying a relationship between the images. The SURF algorithm is an extension of the SIFT algorithm, replacing the Gaussian filter of SIFT with a box (mean) filter;
[0139] statistical characteristics computed for one or more textures (locally or over the complete image), histogram data, etc.
[0140] Such statistical characteristics are for example representative of the percentage of texture, or contour, in given areas of the image of a view. They are used by the synthesis device SYNT in order to synthesize an image with a percentage of texture or contour that comes close to this. The same applies to the histogram data relating to different areas of the image of a view, the synthesis device SYNT trying to preserve these histogram data for each of these areas for the image of the view synthesized by said device.
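By way of illustration, some of the synthesis data listed above (contour maps, corners, SIFT features, histogram data) can be extracted as in the following Python sketch, assuming OpenCV (version 4.4 or later for cv2.SIFT_create); the parameter values are arbitrary:

    import cv2
    import numpy as np

    # Illustrative extraction of some of the synthesis data listed above;
    # assumes OpenCV. All parameter values are arbitrary.
    def extract_synthesis_data(texture_bgr):
        gray = cv2.cvtColor(texture_bgr, cv2.COLOR_BGR2GRAY)
        # Contour map (Canny); Sobel, Prewitt or Roberts operators are alternatives.
        contours = cv2.Canny(gray, 100, 200)
        # Harris corner response map.
        corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
        # SIFT keypoints and descriptors (SURF is an alternative).
        keypoints, descriptors = cv2.SIFT_create().detectAndCompute(gray, None)
        # Histogram data, here for the whole image; per-area histograms are possible.
        hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
        return contours, corners, keypoints, descriptors, hist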
[0141] Machine learning methods, such as convolutional neural networks (CNNs), can also be used to extract features useful for image synthesis from reconstructed texture data.
[0142] These different synthesis data can, during step S1.sub.0, be used individually or in combination.
[0143] The data DS.sub.0 generated at the end of step S1.sub.0, and more generally the data generated in each iteration, will subsequently be called synthesis data. Such an expression covers not only partial or complete depth maps, but also the aforementioned data. Other types of data not mentioned here are also possible.
[0144] Optionally, the synthesis data DS.sub.0 can also be generated from at least one piece of information MD associated with the sequence of multi-view images IV.sub.1 to IV.sub.N and/or from one or more available depth maps CP.sub.1 to CP.sub.N that are respectively associated with the texture components CT.sub.1 to CT.sub.N of the images IV.sub.1 to IV.sub.N.
[0145] In the event of the synthesis data DS.sub.0 being depth data:
[0146] if these depth data are generated from texture data of a single image of one view among the N, the depth estimation is performed for example using a monocular neural network, a contour-based depth estimation algorithm, T-junction analysis, etc.;
[0147] if these depth data are generated from texture data of at least two images among the N, the depth estimation is performed for example using a stereo matching algorithm, a template matching algorithm, etc.
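For the two-image case, a minimal Python sketch of SAD-based stereo matching, assuming OpenCV and a rectified pair of texture components, might look as follows; the matcher parameters are illustrative:

    import cv2

    # Illustrative SAD-based block matching between two rectified texture
    # components; the matcher parameters are arbitrary.
    def estimate_disparity(left_gray, right_gray):
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(left_gray, right_gray)  # fixed-point, scaled by 16
        return disparity.astype("float32") / 16.0           # depth follows from camera geometry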
[0148] Optionally, the generation S1.sub.0 of the synthesis data can be accompanied by the generation of an occupancy map OM.sub.i and/or of a confidence map CM.sub.i associated with the synthesis data DS.sub.0.
[0151] According to one embodiment, the generation of the synthesis data is performed on the basis of a threshold, which is for example a SAD (Sum of Absolute Differences) distortion value representative of the quality of the stereo matching, or more generally on the basis of any other indicator establishing, for each pixel, a degree of confidence in the synthesis data item to be generated for that pixel.
[0152] According to a preferred embodiment, the map of synthesis data is filled in only for pixels whose confidence is above a given threshold.
[0153] According to another embodiment, the map of synthesis data is filled in for all the pixels of the image, but is accompanied either by a confidence map at each point or by the SAD value, which needs to be compared with the threshold, at each point. According to yet another embodiment, the map of synthesis data is incomplete, but is nevertheless accompanied either by a confidence map at each existing point or by the SAD value, which needs to be compared with the threshold, at each existing point.
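A minimal Python sketch of the preferred embodiment above, assuming NumPy arrays for the synthesis data and their per-pixel confidence; the reserved value and the threshold are illustrative:

    import numpy as np

    RESERVED = -1.0  # assumed reserved value marking pixels without synthesis data

    # Illustrative sketch: the map of synthesis data is filled in only where the
    # confidence exceeds a threshold (a low SAD can be mapped to a high confidence).
    def partial_map(values, confidence, threshold):
        out = np.full(values.shape, RESERVED, dtype=np.float32)
        keep = confidence > threshold
        out[keep] = values[keep]
        return out, keep  # 'keep' can also serve as an occupancy map OM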
[0154] The iterative synthesis method according to the invention continues with the computing module M2 of the synthesis device SYNT, which, in S2.sub.0, computes an image IS.sub.0 of a synthesized view from the synthesis data DS.sub.0 and from at least one image of a view of the multi-view video.
[0155] In S3.sub.0, the synthesized image IS.sub.0 is analyzed with respect to a synthesis performance criterion CS by means of the analysis module M3 of the synthesis device SYNT.
[0156] Different synthesis performance criteria are possible at this stage. These criteria are, as non-exhaustive examples:
[0157] CS1: number of missing pixels in the synthesized image IS.sub.0 below a threshold; and/or
[0158] CS2: size of the largest area to be filled in the synthesized image IS.sub.0 below a threshold, such a criterion signifying that an inpainting algorithm will succeed in reconstructing the missing data of this image efficiently; and/or
[0159] CS3: visual quality of the synthesized image IS.sub.0 deemed sufficient, as measured for example by an objective evaluation algorithm with or without reference (full-reference, reduced-reference or no-reference objective metric).
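By way of illustration, the criteria CS1 and CS2 above could be checked as in the following Python sketch, assuming NumPy and SciPy; both thresholds are arbitrary:

    import numpy as np
    from scipy import ndimage

    # Illustrative check of CS1 and CS2 on a boolean mask of non-synthesized
    # pixels; both thresholds are arbitrary.
    def check_cs1_cs2(missing, max_missing=1000, max_hole_area=200):
        cs1_ok = int(missing.sum()) <= max_missing   # CS1: few missing pixels
        labels, n = ndimage.label(missing)           # connected areas to inpaint
        largest = int(np.bincount(labels.ravel())[1:].max()) if n else 0
        cs2_ok = largest <= max_hole_area            # CS2: largest hole small enough
        return cs1_ok and cs2_ok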
[0160] If the criterion is met, the synthesized image IS.sub.0 is considered valid and is released for storage while waiting to be displayed by a terminal of a user who has requested this image and/or to be directly displayed by the terminal.
[0161] If the criterion is not met, in S4.sub.0, the analysis module M3 transmits to the module M1 information INF.sub.0 about the pixels or pixel areas of the synthesized image IS.sub.0 which have not been synthesized or which have not been considered correctly synthesized at the end of the image processing phase P0. Such information INF.sub.0 is contained in a dedicated message or request. More specifically, the information INF.sub.0 takes the form of the coordinates of the pixels that are not synthesized or are poorly synthesized according to the aforementioned criteria, or even of a percentage of non-synthesized pixels. Optionally, in S4.sub.0 the analysis module M3 also transmits the occupancy map OM.sub.i or the confidence map CM.sub.i that were generated in S1.sub.0.
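A hypothetical container for such a message, introduced here purely for illustration (all field names are assumptions), might be:

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple
    import numpy as np

    # Hypothetical structure for the information INF.sub.0 sent from M3 to M1;
    # all field names are illustrative.
    @dataclass
    class SynthesisFeedback:
        bad_pixels: List[Tuple[int, int]] = field(default_factory=list)  # coordinates of pixels not/poorly synthesized
        missing_ratio: Optional[float] = None        # percentage of non-synthesized pixels
        occupancy_map: Optional[np.ndarray] = None   # OM.sub.i, optional
        confidence_map: Optional[np.ndarray] = None  # CM.sub.i, optional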
[0162] The synthesis device SYNT then triggers a new image processing phase P1, in order to improve/refine the synthesized image IS.sub.0 which was obtained previously.
[0163] To this end, in S1.sub.1, the synthesis data generation module M1 again generates synthesis data DS.sub.1 from texture data (pixels) of at least one of the images IV.sub.1 to IV.sub.N.
[0164] These may be for example:
[0165] the texture data T.sub.i used in S1.sub.0, and/or
[0166] texture data T′.sub.i of the texture component CT.sub.i of the same image IV.sub.i, but different from the texture data T.sub.i, and/or
[0167] texture data T.sub.k of another image IV.sub.k among the N (1≤k≤N).
[0168] As in S1.sub.0, the synthesis data DS.sub.1 can also be generated from at least one piece of information MD associated with the sequence of multi-view images IV.sub.1 to IV.sub.N and/or from one or more available depth maps CP.sub.1 to CP.sub.N that are respectively associated with the texture components CT.sub.1 to CT.sub.N of the images IV.sub.1 to IV.sub.N. They may also be depth data, partial depth maps, contour maps, etc., used individually or in combination, the choice of the type of these data not necessarily being the same as that used in S1.sub.0.
[0169] A map of synthesis data DS.sub.1 generated for these texture data is obtained at the end of step S1.sub.1.
[0170] The selection of the image(s) to generate the synthesis data DS.sub.1 can be performed in different ways.
[0171] If for example the two texture components CT.sub.1 and CT.sub.2 were used to generate the synthesis data DS.sub.0 in S1.sub.0, the synthesis data generation module M1 can be configured to select the two respective texture components CT.sub.3 and CT.sub.4 of the images IV.sub.3 and IV.sub.4 closest to the images IV.sub.1 and IV.sub.2 in S1.sub.1.
[0172] According to another example, in view of the fact that each of the images IV.sub.1 to IV.sub.N has different captured information from the scene, if the module M1 has used the texture components CT.sub.1 and CT.sub.2 to generate the synthesis data DS.sub.0 in S1.sub.0, it can be configured to use the texture components CT.sub.1 and CT.sub.3 or else CT.sub.2 and CT.sub.3 in S1.sub.1 to generate the synthesis data DS.sub.1.
[0173] According to yet another example, the module M1 can simultaneously use multiple texture components, for example four texture components CT.sub.3, CT.sub.4, CT.sub.5, CT.sub.6.
[0174] According to yet another example, in addition to or instead of the texture components CT.sub.1 to CT.sub.N, the module M1 can use one or more previously synthesized images. For example, in S1.sub.1, the module M1 can use the synthesized image IS.sub.0 to generate the synthesis data DS.sub.1.
[0175] According to yet another example, the choice of texture data T.sub.i and/or T′.sub.i and/or T.sub.k in S1.sub.1 is conditional on the information INF.sub.0. If for example the information INF.sub.0 is representative of the location of one or more occlusion areas in the synthesized image IS.sub.0, the module M1 is configured to select in S1.sub.1 one or more image texture components that are further away than that/those used in S1.sub.0 to fill these areas.
[0176] Such a selection can be guided by the information MD, corresponding for example to camera parameters such as the angle of the camera and its position in the scene.
[0177] When the selection of the texture data T.sub.i and/or T′.sub.i and/or T.sub.k is conditional on the information INF.sub.0, Table 1 below shows, merely by way of example, for each unsatisfied synthesis performance criterion CS1 to CS3, the content of the corresponding information INF.sub.0 and the action to be performed in S1.sub.1 by the synthesis data generation module M1.
TABLE 1
Synthesis performance criterion not OK: CS1 to CS3
INF.sub.0: Number/percentage of missing pixels in the synthesized image IS.sub.0 above a threshold
Action M1: Select synthesis data associated with the missing pixels
[0178] According to one embodiment, the module M1 generates the synthesis data DS.sub.1 in S1.sub.1 by modifying, using the texture data selected in S1.sub.1, all or some of the synthesis data DS.sub.0 generated in the image processing phase P0.
[0179] According to another embodiment, the synthesis data DS.sub.1 generated in S1.sub.1 are combined with the synthesis data DS.sub.0 generated in the image processing phase P0.
[0180] According to another embodiment, the module M1 computes intermediate synthesis data DS.sup.int.sub.1 from the texture data selected in S1.sub.1, the intermediate synthesis data DS.sup.int.sub.1 then being merged with the synthesis data DS.sub.0.
[0181] To merge the synthesis data DS.sub.0 and DS.sup.int.sub.1, various embodiments can be implemented.
[0182] According to a first embodiment, the missing synthesis data in a map of synthesis data, for example the map containing the synthesis data DS.sup.int.sub.1, are retrieved among the existing synthesis data DS.sub.0 of the other map, or vice versa.
[0183] According to a second embodiment, if some of the synthesis data DS.sub.1 were already present among the synthesis data DS.sub.0 and, in the map of synthesis data DS.sub.1, they are each associated with a higher degree of confidence than in the map of synthesis data DS.sub.0, these synthesis data DS.sub.0 are replaced by the corresponding synthesis data DS.sub.1 obtained in S1.sub.1. For example, the SAD value can be computed by a stereo matching algorithm that carries out matching between the image IV.sub.i or IV.sub.k used in S1.sub.1 to generate the synthesis data and another image among the N. The vectors found in order to match these two images come from a motion estimation which returns a SAD correlation level, considered here as an example of a degree of confidence.
[0184] According to a third embodiment, if some among the synthesis data DS.sub.1 were already present among the synthesis data DS.sub.0, and these synthesis data DS.sub.1 are depth values associated with elements situated in the foreground of the scene, for example, whereas the synthesis data DS.sub.0 already present are depth values associated with elements situated in the background of the scene, for example, these synthesis data DS.sub.0 are replaced by these synthesis data DS.sub.1 so as to preserve the information of a priority depth plane (in this example the foreground of the scene).
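A minimal Python sketch of these merge embodiments, assuming NumPy and depth-type synthesis data where a smaller value means closer to the camera; the three embodiments above are alternatives, applied here in sequence purely for illustration:

    import numpy as np

    RESERVED = -1.0  # assumed reserved value for missing synthesis data

    # Illustrative merge of the maps DS.sub.0 and DS.sup.int.sub.1; assumes
    # depth-type data where a smaller value means closer (the priority plane).
    def merge_synthesis_data(ds0, ds1, conf0, conf1, foreground_priority=True):
        merged = ds0.copy()
        # 1) retrieve values missing in one map from the other map
        missing = (merged == RESERVED) & (ds1 != RESERVED)
        merged[missing] = ds1[missing]
        both = (ds0 != RESERVED) & (ds1 != RESERVED)
        # 2) where both maps have a value, keep the more confident one
        better = both & (conf1 > conf0)
        merged[better] = ds1[better]
        # 3) optionally preserve the priority depth plane (here the foreground)
        if foreground_priority:
            closer = both & (ds1 < ds0)
            merged[closer] = ds1[closer]
        return merged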
[0185] The image processing phase P1 continues with the computing module M2 of the synthesis device SYNT computing, in S2.sub.1, an image IS.sub.1 of a synthesized view.
[0186] According to a first embodiment of the synthesis S2.sub.1, the computing module M2 modifies the synthesized image IS.sub.0 obtained during the image processing phase P0, at least on the basis of the synthesis data DS.sub.1 generated in S1.sub.1; at least one image among the images IV.sub.1 to IV.sub.N can also be used for this modification.
[0187] According to a second embodiment, the computing module M2 computes an intermediate synthesized image IS.sup.int.sub.1 from the synthesis data DS.sub.1 and optionally from one or more of the images IV.sub.1 to IV.sub.N, and then combines the intermediate synthesized image IS.sup.int.sub.1 with the synthesized image IS.sub.0 computed in the image processing phase P0.
[0188] There are various ways to combine the synthesized images IS.sup.int.sub.1 and IS.sub.0.
[0189] According to a first embodiment, the synthesized image IS.sub.1 is obtained by filling the missing pixels of one of the synthesized images IS.sup.int.sub.1 and IS.sub.0, for example the image IS.sup.int.sub.1, with the corresponding existing pixels of the other image, here IS.sub.0.
[0190] According to a second embodiment, if synthesized pixels in the image IS.sup.int.sub.1 are also present in the image IS.sub.0 in the same position, the synthesized image IS.sub.1 is obtained by selecting, among the images IS.sup.int.sub.1 and IS.sub.0, the common synthesized pixel that is associated with the highest degree of confidence, using the aforementioned confidence map CM.
[0191] According to a third embodiment, if synthesized pixels in the image IS.sup.int.sub.1 are also present in the image IS.sub.0 in the same position, the synthesized image IS.sub.1 is obtained by selecting, among the images IS.sup.int.sub.1 and IS.sub.0, the common synthesized pixel that is associated with a priority depth plane, for example the foreground of the scene.
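These combination embodiments can be sketched as follows in Python, assuming NumPy, per-pixel occupancy masks and confidence maps; a depth-plane test analogous to the merge sketch above could replace the confidence test:

    import numpy as np

    # Illustrative combination of IS.sup.int.sub.1 with IS.sub.0; 'valid_*'
    # are per-pixel occupancy masks, 'conf_*' per-pixel confidence maps.
    def combine_views(is_int1, is0, valid1, valid0, conf1, conf0):
        # fill missing pixels of IS.sup.int.sub.1 from IS.sub.0 (first embodiment)
        out = np.where(valid1[..., None], is_int1, is0)
        # where both images have a synthesized pixel, keep the more confident one
        prefer0 = valid1 & valid0 & (conf0 > conf1)
        out[prefer0] = is0[prefer0]
        return out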
The image processing phase P1 then continues with an analysis, in S3.sub.1, of the synthesized image IS.sub.1 with respect to a synthesis performance criterion CS by means of the analysis module M3 of the synthesis device SYNT.
[0192] The performance criteria used in the image processing phase P1 may be the same as the aforementioned criteria CS1 to CS3. Two other criteria below may also be considered, non-exhaustively:
[0193] CS4: quality of the temporal consistency between the synthesized image IS.sub.1 and the synthesized image IS.sub.0 considered sufficient; and/or
[0194] CS5: the value of some synthesized pixels in the synthesized image IS.sub.1 is equal or substantially equal to that of some synthesized pixels in the same respective position in the synthesized image IS.sub.0, leading to two similar synthesized images.
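By way of illustration, criterion CS5 above could be evaluated as in the following Python sketch, assuming NumPy; the tolerance and the required proportion of matching pixels are assumptions:

    import numpy as np

    # Illustrative evaluation of CS5: IS.sub.1 and IS.sub.0 are considered
    # similar where pixel values are substantially equal; the tolerance and
    # the required proportion of matching pixels are arbitrary.
    def check_cs5(is1, is0, tolerance=2.0, required_ratio=0.95):
        diff = np.abs(is1.astype(np.float32) - is0.astype(np.float32))
        close = np.all(diff <= tolerance, axis=-1)  # per-pixel test over the channels
        return float(close.mean()) >= required_ratio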
[0195] If the criterion/criteria CS1 to CS5 is/are met, the synthesized image IS.sub.1 is considered valid and is released for storage while waiting to be displayed by a terminal of a user who has requested this image and/or to be directly displayed by the terminal.
[0196] If the criterion/criteria CS1 to CS5 is/are not met, the analysis module M3 transmits a piece of information INF.sub.1 to the module M1 in S4.sub.1. If the analysis S3.sub.1 of the synthesized image IS.sub.1 uses the synthesis performance criterion CS4, the information INF.sub.1 contains the coordinates of the pixels of the area(s) of the synthesized image IS.sub.1 that are considered to lack temporal consistency with the pixels in corresponding positions in the synthesized image IS.sub.0. If the analysis S3.sub.1 of the synthesized image IS.sub.1 uses the synthesis performance criterion CS5, the information INF.sub.1 contains the coordinates of the pixels of the area(s) of the synthesized image IS.sub.1 that are considered to lack pixel consistency with the pixels in corresponding positions in the synthesized image IS.sub.0.
[0197] The synthesis device SYNT then triggers a new image processing phase P2 similar to the image processing phase P1 so as to further improve/refine/complete the synthesized image IS.sub.1 which was obtained previously.
[0198] It should be noted that, from the image processing phase P2 onward, if the selection of particular texture data is conditional on the information INF.sub.1, the aforementioned criteria CS4 and CS5 can be considered for generating new synthesis data DS.sub.2. To this end, Table 2 below shows, merely by way of example, for each unsatisfied synthesis performance criterion CS1 to CS5, the content of the corresponding information INF.sub.1 and the action to be performed in P2 by the synthesis data generation module M1.
TABLE 2
Synthesis performance criterion not OK: CS1 to CS3
INF.sub.1: Number/percentage of missing pixels in the synthesized image IS.sub.1 above a threshold
Action M1: Select synthesis data associated with the missing pixels

Synthesis performance criterion not OK: CS4
INF.sub.1: Quality of the temporal consistency between the synthesized image IS.sub.0 and the synthesized image IS.sub.1 considered insufficient
Action M1: For the area of the synthesized image which does not meet CS4, select new synthesis data for this area which allow a synthesized image having temporal consistency with the synthesized image obtained in the previous image processing phase to be obtained

Synthesis performance criterion not OK: CS5
INF.sub.1: Inconsistency in terms of pixel value between pixels of the synthesized image IS.sub.1 and pixels in the same respective positions in the synthesized image IS.sub.0
Action M1: For the area of the synthesized image which does not meet CS5, select new synthesis data for this area which allow a synthesized image having consistency in terms of pixel value with the synthesized image obtained in the previous image processing phase to be obtained
[0199] In accordance with the invention, an image processing phase can be iterated as long as one or more synthesis performance criterion/criteria is/are not met.
[0200] In a current image processing phase Pc, the synthesis data generation module M1 generates, in S1.sub.c, synthesis data DS.sub.c from texture data of at least one of the images IV.sub.1 to IV.sub.N.
[0205] As in S1.sub.0 and in S1.sub.1, the synthesis data DS.sub.c can also be generated from at least one piece of information MD associated with the sequence of multi-view images IV.sub.1 to IV.sub.N and/or from one or more available depth maps CP.sub.1 to CP.sub.N that are respectively associated with the texture components CT.sub.1 to CT.sub.N of the images IV.sub.1 to IV.sub.N. They may also be depth data, partial depth maps, contour maps, etc., used individually or in combination, the choice of the type of these data not necessarily being the same as that used in the previous synthesis data generation steps S1.sub.0, S1.sub.1, etc.
[0206] Selection of the image(s) for generating the synthesis data DS.sub.c is performed in the same way as in the aforementioned embodiments.
[0207] In particular, the synthesis data generation module M1 can be configured to select, in S1.sub.c, one or more texture components of the images IV.sub.1 to IV.sub.N which, depending on the result of the analysis of the synthesized images performed in the previous image processing phases, may be closer or further away from the images already used in the previous image processing phases.
[0208] According to yet another example, in addition to or instead of the texture components CT.sub.1 to CT.sub.N, the module M1 can use one or more images IS.sub.0, IS.sub.1, . . . already synthesized during the previous processing phases.
[0209] As already explained above, the choice of texture data in S1.sub.c can be conditional on the information INF.sub.c−1, this choice also being able to be complemented by the information INF.sub.0, INF.sub.1, . . . generated at the end of the analyses of the synthesized images IS.sub.0, IS.sub.1, . . . , respectively. The selection of the new texture data will be based on the criteria of Table 2 above, by considering the image IS.sub.c−1 synthesized during the previous image processing phase P.sub.c−1 in the light of at least one of the other images IS.sub.0, IS.sub.1, . . . already synthesized during the previous processing phases.
[0210] As already explained when describing step S1.sub.1, the synthesis data DS.sub.c can be generated in S1.sub.c by modifying, using the texture data T.sub.i and/or T′.sub.i and/or T.sub.k and/or T.sub.m used as input for the module M1, all or some of the synthesis data DS.sub.c−1 generated in the previous image processing phase P.sub.c−1, but also the synthesis data DS.sub.0, DS.sub.1, . . . before the phase P.sub.c−1.
[0211] As already explained above when describing step S1.sub.0, according to one embodiment, the generation S1.sub.c of the synthesis data can be performed on the basis of a threshold which is a SAD distortion value, for example. The SAD distortion value can be the same in each iteration. However, such a SAD value may also vary. In the first image processing phases P0, P1, P2, the required SAD value is for example the lowest possible and is increased in the subsequent processing phases, for example from the current image processing phase Pc. According to another example, the SAD value can be gradually increased in each iteration of an image processing phase.
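Such a threshold schedule can be sketched as follows in Python; the base value, the step and the number of strict phases are purely illustrative:

    # Illustrative SAD threshold schedule: strict (low) in the first image
    # processing phases, then gradually relaxed in the subsequent ones.
    def sad_threshold(phase_index, base=4.0, step=2.0, strict_phases=3):
        if phase_index < strict_phases:  # P0, P1, P2: lowest required SAD
            return base
        return base + step * (phase_index - strict_phases + 1)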
Similarly to the aforementioned image processing phase P1, the current image processing phase Pc continues with the computing module M2 of the synthesis device SYNT computing, in S2.sub.c, an image IS.sub.c of a synthesized view.
[0212] As already explained when describing step S2.sub.1, according to a first embodiment, the computing module M2 modifies the synthesized image IS.sub.c−1 obtained during the previous image processing phase P.sub.c−1 at least on the basis of the synthesis data DS.sub.c generated in S1.sub.c. At least one image among the images IV.sub.1 to IV.sub.N can also be used for this modification. According to a second embodiment, the computing module M2 computes an intermediate synthesized image IS.sup.int.sub.c from the synthesis data DS.sub.c obtained in S1.sub.c and optionally from one or more images IV.sub.1 to IV.sub.N, and then combines the intermediate synthesized image IS.sup.int.sub.c with the synthesized image IS.sub.c−1 computed in the previous image processing phase P.sub.c−1 and optionally with one or more images IS.sub.0, IS.sub.1, . . . already synthesized before the previous image processing phase P.sub.c−1.
[0213] The image processing phase Pc continues with an analysis, in S3.sub.c, of the synthesized image IS.sub.c with respect to a synthesis performance criterion CS by means of the analysis module M3 of the synthesis device SYNT.
[0214] The performance criteria used in the image processing phase Pc are the aforementioned criteria CS1 to CS5. If one or more of these criteria is/are met, the synthesized image IS.sub.c is considered valid and is released for storage while waiting to be displayed by a terminal of a user who has requested this image and/or to be directly displayed by the terminal. If one or more of these criteria is/are not met, the analysis module M3 transmits a piece of information INF.sub.c to the module M1 in S4.sub.c. A new image processing phase Pc+1 is then triggered.