Device and method for generating and rendering enriched multimedia content

11706478 · 2023-07-18

Abstract

A method of enrichment of at least one source multimedia content for rendering on a device is disclosed. A rendering zone has a width/height ratio different from the width/height ratio of the source multimedia content. In one aspect, the method comprises identifying, from the source multimedia content, at least one enrichment content intended to be rendered in a region of the rendering zone distinct from a region for rendering the source multimedia content, and enriching the source multimedia content, taking into account the at least one identified enrichment content, to deliver at least one enriched multimedia content comprising at least one piece of enrichment data representative of a rendering of the enrichment content in a region of the rendering zone distinct from a region for rendering the source multimedia content.

Claims

1. A method of enrichment of at least one source multimedia content for rendering, on a rendering device, in a rendering zone having a width/height ratio different from a width/height ratio of the source multimedia content, the method comprising: identifying, from the source multimedia content, at least one enrichment content intended to be rendered in a region of the rendering zone distinct from a region for rendering the source multimedia content, and obtaining at least one enriched multimedia content comprising the source multimedia content and at least one piece of enrichment data, in the form of metadata, comprising at least one rendering parameter to be applied when rendering the identified enrichment content in a region of the rendering zone distinct from a region for rendering the source multimedia content.

2. The method according to claim 1, further comprising: processing the at least one identified enrichment content and delivering the at least one piece of enrichment data, wherein obtaining the enriched multimedia content comprises enriching the source multimedia content by inserting the at least one piece of enrichment data in the source multimedia content to deliver the at least one enriched multimedia content.

3. The method according to claim 2, wherein the identifying at least one enrichment content comprises analyzing the source multimedia content according to at least one predetermined criterion.

4. The method according to claim 3, wherein the at least one identified enrichment content corresponds to a portion of the source multimedia content.

5. The method according to claim 4, wherein the identifying at least one enrichment content comprises performing oculometry of a user during a rendering of the source multimedia content.

6. The method according to claim 2, further comprising, prior to the identifying at least one enrichment content, detecting, in the source multimedia content, at least one piece of data representing the width/height ratio of the source multimedia content.

7. The method according to claim 6, further comprising: detecting a difference between a width/height ratio of the rendering zone and the width/height ratio of the source multimedia content, and in response to detection of the difference: processing the at least one piece of enrichment data, delivering the enrichment content, and rendering the source multimedia content and the enrichment content respectively in two distinct regions of the rendering zone.

8. The method according to claim 6, further comprising, in response to detection of no difference between a width/height ratio of the rendering zone and the width/height ratio of the source multimedia content: detecting the at least one piece of enrichment data, and rendering an indicator of presence of enrichment content.

9. The method according to claim 1, wherein the identifying at least one enrichment content comprises analyzing the source multimedia content according to at least one predetermined criterion.

10. The method according to claim 1, wherein the at least one identified enrichment content corresponds to a portion of the source multimedia content.

11. The method according to claim 1, wherein the identifying at least one enrichment content comprises selecting at least one enrichment content distinct from the source multimedia content.

12. The method according to claim 1, wherein the identifying at least one enrichment content comprises performing oculometry of a user during a rendering of the source multimedia content.

13. The method according to claim 1, wherein the at least one piece of enrichment data comprises at least one spatial coordinate representative of the enrichment content.

14. The method according to claim 1, further comprising, prior to the identifying at least one enrichment content, detecting, in the source multimedia content, at least one piece of data representing the width/height ratio of the source multimedia content.

15. A method for rendering at least one enriched multimedia content in a rendering zone of a rendering device, wherein the at least one enriched multimedia content comprises at least one source multimedia content and at least one piece of enrichment data, in the form of metadata, comprising at least one rendering parameter to be applied when rendering the identified enrichment content in a region of the rendering zone distinct from a region for rendering the source multimedia content, and the method comprises: detecting a difference between a width/height ratio of the rendering zone and a width/height ratio of the source multimedia content, and in response to detection of the difference: processing the at least one piece of enrichment data, delivering the enrichment content, and rendering the source multimedia content and the enrichment content respectively in two distinct regions of the rendering zone.

16. The method according to claim 15, further comprising, in response to detection of no difference: detecting the at least one piece of enrichment data, and rendering an indicator of presence of enrichment content.

17. An enrichment device comprising at least one processor configured to enrich at least one source multimedia content for rendering, on a rendering device in a rendering zone having a width/height ratio different from a width/height ratio of the source multimedia content, wherein the at least one processor is configured to: identify, from the source multimedia content, at least one enrichment content intended to be rendered in a region of the rendering zone distinct from a region for rendering the source multimedia content, and obtain at least one enriched multimedia content comprising the at least one source multimedia content and at least one piece of enrichment data, in the form of metadata, comprising at least one rendering parameter to be applied when rendering the identified enrichment content in a region of the rendering zone distinct from a region for rendering the source multimedia content.

18. A rendering device comprising at least one processor configured to render at least one enriched multimedia content in a rendering zone of the rendering device, wherein the at least one enriched multimedia content comprises at least one source multimedia content and at least one piece of enrichment data, in the form of metadata, comprising at least one rendering parameter to be applied when rendering the identified enrichment content in a region of the rendering zone distinct from a region for rendering the source multimedia content, and wherein the at least one processor is configured to: detect a difference between a width/height ratio of the rendering zone and a width/height ratio of the source multimedia content; and in response to detection of the difference: process the at least one piece of enrichment data, deliver the enrichment content, and render the source multimedia content and the enrichment content respectively in two distinct regions of the rendering zone.

19. A storage medium comprising program code instructions that, when executed by a processor, cause the processor to implement the method according to claim 1.

20. A storage medium comprising program code instructions that, when executed by a processor, cause the processor to implement the method according to claim 15.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) Other features and advantages of the disclosed technology shall appear more clearly from the following description, given by way of a simple illustrative and non-exhaustive example, with reference to the figures, of which:

(2) FIG. 1 presents an example of display in landscape mode of a multimedia content captured in portrait mode, according to the prior art;

(3) FIG. 2 illustrates, in flowchart form, the different steps of the method of enrichment of a source multimedia content according to one embodiment of the disclosed technology;

(4) FIG. 3 illustrates an example of identification of an enrichment content according to one embodiment of the disclosed technology;

(5) FIG. 4 illustrates, in flowchart form, the different steps of the method for rendering an enriched multimedia content according to one embodiment of the disclosed technology;

(6) FIG. 5a illustrates an example of rendering of an enriched content according to a first variant of the disclosed technology;

(7) FIG. 5b illustrates an example of rendering of an enriched content according to a second variant of the disclosed technology;

(8) FIG. 5c illustrates an example of rendering of an enriched content according to a third variant of the disclosed technology;

(9) FIG. 6 illustrates an example of rendering of an enriched content according to another embodiment of the disclosed technology;

(10) FIG. 7 presents the hardware structure of a device for enriching a source multimedia content according to one embodiment of the disclosed technology;

(11) FIG. 8 presents the hardware structure of a device for rendering an enriched multimedia content according to one embodiment of the disclosed technology.

DETAILED DESCRIPTION OF CERTAIN ILLUSTRATIVE EMBODIMENTS

(12) The general principle of the disclosed technology relies on the identification, from a source multimedia content, of an enrichment content to be rendered in addition to/besides the source multimedia content when the rendering zone (for example a communications terminal screen) has a width/height ratio different from that of the source multimedia content.

(13) In addition, the identified enrichment content, which corresponds for example to an extract from the source multimedia content, is not added directly to this source multimedia content to obtain an enriched content, so as not to increase the size/weight (in bytes) of the enriched content (for its storage or its transmission), but is processed in order to obtain one piece of enrichment data which, for its part, will be added to the source multimedia content to obtain an enriched multimedia content.

(14) According to another aspect, the disclosed technology therefore relates to the rendering of such an enriched content, by detection, reading and processing of the enrichment data in order to retrieve the enrichment content to be rendered in addition/besides the source multimedia content, when a difference is detected between a width/height ratio of the rendering zone and a width/height ratio of the source multimedia content.

(15) According to a secondary aspect, the disclosed technology also provides for the display of an indicator of presence of the enriched content (for example in the form of an icon) when, on the contrary, no difference is detected between a width/height ratio of the rendering zone and a width/height ratio of the source multimedia content. In this way, at the time of the rendering, the user knows that his experience would be improved, as compared with the prior art, if he changed the orientation of the rendering zone, since this zone would then no longer have the same width/height ratio as that of the source multimedia content.

(16) Referring now to FIG. 2, a more detailed view is presented of the steps of enrichment of a source multimedia content CS for its rendering, on a rendering device, in a rendering zone having a width/height ratio different from that of the source multimedia content. For example, this case arises when a video, or a photo, is taken in a portrait format, and when the rendering device is oriented in the landscape format. As already discussed with reference to FIG. 1 of the prior art, this is expressed by the addition of black bars on either side of the video or the photo, thus impairing the user experience.
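The ratio comparison that triggers the enrichment can be sketched in a few lines. This is a minimal illustration, not from the source: the function name `ratios_differ` and the `tolerance` threshold are assumptions chosen for the example.

```python
def ratios_differ(src_w: int, src_h: int, zone_w: int, zone_h: int,
                  tolerance: float = 0.01) -> bool:
    """Return True when the rendering zone's width/height ratio differs
    from the source content's ratio by more than `tolerance`."""
    src_ratio = src_w / src_h
    zone_ratio = zone_w / zone_h
    return abs(src_ratio - zone_ratio) > tolerance

# A 1080x1920 portrait video shown in a 1920x1080 landscape zone:
# the ratios (0.5625 vs. about 1.778) differ, so enrichment is needed.
```

A small tolerance avoids treating two contents of the same orientation but slightly different resolutions as mismatched.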

(17) According to one embodiment of the disclosed technology, the method of content enrichment thus comprises a first step of identification 101 of at least one enrichment content intended to be rendered in a region of the rendering zone that is distinct from a region for rendering the source multimedia content CS, for example as a replacement of the black bars of the prior art. According to the disclosed technology, the identification 101 of at least one enrichment content C+ is implemented from the source multimedia content CS, thus making it possible to obtain an enrichment content C+ that is related to/or matches the source multimedia content CS, making the overall rendering consistent and harmonious for the user.

(18) In the examples described here below, the enrichment content therefore constitutes a background of the source content with a rendering that is soft-focused and contrasted relative to the source content. The visual impression which makes for a better user experience can be explained by the fact that the enrichment content is rendered on either side of the source content in its source format (i.e. its snapshot format), the entire rendering zone being occupied by a content other than a black background with, in the foreground, the source content.

(19) A different implementation, not described in detail and not illustrated, could consist of the rendering of the enrichment content on either side of the source content, or else the rendering of two distinct enrichment contents on either side of the source content, the idea being, as already explained, to replace the black bars by a content presenting a meaning/link relative to the source content.

(20) For example, the enrichment content C+.sub.1, according to a first variant of implementation, corresponds to a portion or an extract of the source multimedia content CS, considered as being representative of the source multimedia content CS. To this end, at least one of the following criteria is applied in order to identify the enrichment content C+.sub.1: the presence of recurrent personalities, one or more dominant or recurrent colors, one or more dominant or recurrent backgrounds or landscapes. This first variant is illustrated for example in FIG. 5a, illustrating the rendering of a source multimedia content CS and, on each side, a portion of this source multimedia content CS representing more particularly the central personality of the source content. Besides, a filter has been applied to the identified portion to make it blurred. This first variant is described in greater detail here below, with reference to the second step of the method of enrichment. According to this first variant, if the source content corresponds to a photo taken in a forest landscape with a predominance of green, then the enrichment content may correspond to a portion of the image representing trees.
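The extraction-and-blur operation of this first variant can be sketched as follows, assuming a grayscale image held as a list of pixel rows. The helper names `crop` and `box_blur`, and the choice of a 3x3 box filter, are illustrative assumptions, not details from the source.

```python
def crop(image, left, top, right, bottom):
    """Extract a rectangular portion of a grayscale image (list of rows)."""
    return [row[left:right] for row in image[top:bottom]]

def box_blur(image):
    """Soften an image with a 3x3 box filter, giving the identified portion
    the soft-focused look described for the background; edge pixels average
    only the neighbors that exist."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out
```

In practice an image library would provide both operations; the point is only that the enrichment content is a cropped, filtered portion of the source, not a new asset.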

(21) According to a second variant of implementation, the enrichment content C+.sub.2 corresponds to a multimedia content imported from a database, or a library of images, videos, animations or again texts, considered here too as representative of the source multimedia content CS. To this end, at least one of the following similarity criteria is applied in order to identify the enrichment content C+.sub.2: dominant or recurrent color or colors of the content to be selected in the database similar to one of the colors of the source content, dominant or recurrent background or landscape similar to a background or landscape of the source content. This second variant is illustrated for example in FIG. 5b, showing the rendering of a source multimedia content CS and, on each side, a background presenting geometrical shapes in tones close to those of the upper part of the source content (i.e. a background of blue tones similar to those of the sky).

(22) According to a third variant illustrated in FIG. 5c, the enrichment content C+.sub.3 corresponds to a background having a beach landscape (consistent with the source content representing a little girl on a pontoon moving forward in the sea, with a blue sky in the background), as well as a customized message. Such a contextualized or personalized enrichment content C+.sub.3 can require an action by the user at the time of the capture of the source content, so as to define for example the text to be added to the enrichment content.

(23) The method of content enrichment then comprises a step of enrichment 102 of the source multimedia content CS, taking account of the preliminarily identified enrichment content and delivering at least one enriched multimedia content CS.sub.enr. According to the disclosed technology, the enriched multimedia content CS.sub.enr comprises at least one piece of enrichment data D+ representative of a rendering of the enrichment content C+ in a region of the rendering zone that is distinct from a region for rendering the source multimedia content CS.

(24) Thus, in taking account of the preliminarily identified enrichment content, an enriched content is obtained, comprising especially the source content and at least one piece of enrichment data, for example in the form of metadata, that enables the retrieval of the enrichment content for its rendering without having to add it as such to the source content. In this way, the enriched content does not have a size (in bytes) that is very different from the source content, thus optimizing the resources for its storage and/or for its transmission.

(25) For example, the enrichment data is derived from a processing of the enrichment content in order to extract one or more pieces of information therefrom that make it possible to render it, such as for example: an image number in a source video, as well as one or more spatial coordinates enabling the identification of a portion in this image; one or more spatial coordinates enabling the identification of a portion in a source image; an identifier (URL or pointer) of an image/video/animation/text in a database or library; one or more identifiers of parameters to be applied, at the time of the rendering, to the enrichment content, such as for example a level of blur to obtain a contrasted effect of the enrichment content relative to the source content.

(26) Besides, according to one particular feature of the disclosed technology, the piece or pieces of enrichment data correspond to metadata transmitted at the same time as the source content in an enriched content CS.sub.enr. These metadata are then read, at the time of the rendering of the enriched content, so as to be able to render the source content and the enrichment content if the format of the rendering zone is different from that of the source content.

(27) If we consider the first alternative embodiment described here above, in which the enrichment content corresponds to a portion of the source content (video), the enrichment data must enable the rendering of this portion of the source content, and therefore comprise for example: one or more image numbers, depending on whether it is chosen to extract the enrichment content several times during the rendering (i.e. the enrichment content changes as and when the source content is rendered, in order to adapt thereto and therefore to offer a consistent and smooth rendering throughout) or to extract it only from an identified reference image; one or more spatial coordinates, obtained for example according to the process described here below with reference to FIG. 3; a piece of information for setting the parameters of a conversion to be applied to the enrichment content, such as for example a degree of blurred rendering.

(28) FIG. 3 therefore illustrates the result of a process for obtaining/computing spatial coordinates used to identify the enrichment content to be rendered. First of all, the source content CS (image or video) is analyzed, for example by reading metadata already present in this source content (see below), so as to find out especially the snapshot format, for example portrait or landscape. Knowledge of this format makes it possible especially to set the parameters of the size of the content portion that will form the enrichment content. Then, an analysis of all the images of the source video is implemented, for example via an artificial intelligence algorithm, to identify a representative portion (in this case around the child), taking into account the criteria already described here above (recurrent personalities, dominant color or colors, etc.). On the basis of this representative portion, the coordinates of P1 and P2 are computed, and then the spatial reference coordinate P3 is computed from P1 and P2. It is this coordinate that will advantageously be stored in the metadata of the enriched content. Indeed, the advantage of storing a single coordinate P3, and not the coordinates P1 and P2, lies especially in the adaptation of the format of the enrichment content at the time of the rendering. Thus, during the reading of the video on a rendering device, knowledge of the coordinate P3 enables the re-computation of P1 and P2 as a function of the resolution/format of the rendering device that plays the video. These steps therefore make it possible to determine a rectangle in the images according to the landscape width/height ratio, if for example the video has been recorded in the portrait format, the goal of the disclosed technology being to adapt the rendering of a source content when the rendering zone has a ratio different from that of the source content. As illustrated in FIG. 3, the identified rectangle has a landscape format while the source content has been taken in the portrait format, because the rectangle corresponding to the enrichment content is intended to be rendered in the background of the source content only when a user wishes to view the source content in landscape mode.
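The re-computation of P1 and P2 from the stored center P3 can be sketched as follows. This is a minimal geometric illustration under stated assumptions (the rectangle spans the full source width, and the helper name `corners_from_center` is hypothetical); the source does not specify the exact formula.

```python
def corners_from_center(p3, target_ratio, frame_w, frame_h):
    """Recompute the corners P1 (top-left) and P2 (bottom-right) of the
    enrichment rectangle from the stored center P3.
    target_ratio is the rendering zone's width/height ratio (e.g. 16/9).
    The rectangle spans the full source width and is clamped so it stays
    inside the frame vertically."""
    cx, cy = p3
    rect_w = frame_w                          # full portrait width
    rect_h = round(rect_w / target_ratio)     # height for the target ratio
    top = min(max(cy - rect_h // 2, 0), frame_h - rect_h)
    return (0, top), (rect_w, top + rect_h)

# Portrait 1080x1920 frame, representative point at (540, 1000),
# rendered in a 16:9 landscape zone:
p1, p2 = corners_from_center((540, 1000), 16 / 9, 1080, 1920)
```

Storing only P3 and re-deriving P1/P2 at render time is what lets the same metadata serve rendering devices of different resolutions and ratios.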

(29) Once the spatial coordinates P1, P2 and P3 have been obtained, the coordinate P3 is inserted, in metadata form, into the source content to deliver an enriched content.

(30) Several known text formats can be used to describe metadata, such as for example XML (Extensible Markup Language), XMP (Extensible Metadata Platform) or again JSON (JavaScript Object Notation). A novel format can also be defined to describe the enrichment data intended for insertion into the source content to obtain an enriched content.

(31) For example, if we refer to XMP, there is an “Orientation” field which corresponds to a field of characters that can have a value “Horizontal” or “Vertical” making it possible to know the snapshot format (portrait or landscape).

(32) According to the disclosed technology, several fields could be added such as those described in the table below:

(33) TABLE 1

    Field name (or "Tag")    Possible values          Description
    EnrichVideoMode          YES/NO                   Field for reporting that the enriched content is available
    AutoFrame                YES/NO                   Field indicating that all the images of the source content are to be exploited (YES) or only one image is to be exploited (NO)
    MonoFrame                Image reference          Field enabling knowledge of the reference image if the "AutoFrame" field is at NO
    EnrichFrameCenter        X, Y coordinate of P3    Point P3 computed by the method of enrichment
    EnrichFrameOperation     Blur                     Example of setting parameters of blurred rendering
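At render time, these fields have to be read back and interpreted. The sketch below is a hedged illustration: the dictionary representation, the value `42` for the reference image and the coordinate `(540, 1000)` are invented for the example; only the field names come from Table 1.

```python
# Hypothetical metadata record using the field names of Table 1.
metadata = {
    "EnrichVideoMode": "YES",
    "AutoFrame": "NO",
    "MonoFrame": 42,                   # reference image when AutoFrame is NO
    "EnrichFrameCenter": (540, 1000),  # point P3
    "EnrichFrameOperation": "Blur",
}

def read_enrichment_fields(meta):
    """Interpret the Table 1 fields; return None when no enrichment
    content is reported as available."""
    if meta.get("EnrichVideoMode") != "YES":
        return None
    # AutoFrame=YES means every image is exploited; otherwise MonoFrame
    # names the single reference image to extract from.
    frame = None if meta.get("AutoFrame") == "YES" else meta.get("MonoFrame")
    return {
        "reference_frame": frame,      # None => re-extract per image
        "center_p3": meta["EnrichFrameCenter"],
        "operation": meta.get("EnrichFrameOperation"),
    }
```

The `EnrichVideoMode` test doubles as the presence check used later for the on-screen indicator.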

(34) These fields, transmitted in the enriched content in addition to the source content, make it possible at the time of the rendering to display the identified enrichment content when the format of the rendering zone is different from that of the source content.

(35) As already indicated here above, it is possible to transmit, via the metadata, only one spatial coordinate P3 enabling the computation of the other coordinates P1 and P2 to retrieve the enrichment content to be rendered.

(36) According to the embodiment described here above and its variants, the method of enrichment of a source multimedia content can be implemented by a camera device (for example a smartphone, a tablet, etc.), by a content storage device (for example a content server) or again by a rendering device (for example a smartphone, a tablet, a computer, etc.).

(37) According to a second embodiment, in which the method of enrichment of a source multimedia content is implemented by the rendering device itself, an oculometry (gaze-tracking) technique is used to identify the enrichment content. Thus, the enrichment content is updated regularly, as and when the enriched content is rendered, as a function of the results of the tracking of the user's eye movements. To this end, predetermined criteria can be applied to the source content, such as for example: a minimum duration of fixation on a part of the screen by the user, in order to identify a portion of the source content representative of an interest of the user; a frequency of updating of the enrichment content, in order to preserve a consistent and smooth user experience, by not changing the enrichment content too frequently, so as not to disturb the user, while at the same time adapting the enrichment content to the evolution of the source content while it is being rendered. For example, if the source video shows landscapes and then personalities, the enrichment content could in a first stage be representative of the landscapes (a color or a background) and in a second stage be representative of one or more recurrent personalities.
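The two criteria above (minimum fixation duration, bounded update frequency) can be combined in a small state machine. This is a sketch under assumptions: the class name, the threshold values and the callback shape are all invented for the example; a real eye tracker would supply the fixation events.

```python
class GazeEnrichmentUpdater:
    """Update the enrichment focus only when the user fixates a screen
    region long enough, and no more often than a minimum interval.
    Both thresholds are illustrative defaults, not values from the source."""

    def __init__(self, min_fixation_s=1.5, min_update_interval_s=10.0):
        self.min_fixation = min_fixation_s
        self.min_interval = min_update_interval_s
        self.focus = None
        self.last_update = -float("inf")

    def on_fixation(self, point, duration_s, now_s):
        """Called for each fixation reported by the eye tracker.
        Returns the new focus point, or None if nothing changes."""
        if duration_s < self.min_fixation:
            return None              # a glance, not a zone of interest
        if now_s - self.last_update < self.min_interval:
            return None              # too soon: keep the rendering smooth
        self.focus = point
        self.last_update = now_s
        return point
```

The returned point would then play the role of P3 in the enrichment data, as described in the next paragraph.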

(38) According to this second embodiment, the piece or pieces of enrichment data D+ correspond for example to a spatial coordinate representative of the center of a zone of interest of the user, located in the source content by means of the oculometry/gaze-tracking technique.

(39) Referring now to FIGS. 4, 5a to 5c and 6, a description is provided of a method for rendering an enriched multimedia content according to one embodiment of the disclosed technology.

(40) The main steps of this method for rendering are illustrated in FIG. 4 and are implemented by a rendering device. Besides, the enriched multimedia content CS.sub.enr according to any one of the embodiments of the method of enrichment described here above comprises at least one source multimedia content CS and at least one piece of enrichment data D.sub.+ representative of a rendering of an enrichment content C+ in a region of the rendering zone (of the rendering device) distinct from a region for rendering the source multimedia content CS.

(41) In a first stage, a step of detection 201 of a difference between a width/height ratio of the rendering zone and a width/height ratio of the source multimedia content CS is implemented, so as to determine whether the enrichment content must be rendered in addition to the source content.

(42) Thus, if a difference is detected, a step for processing 202 the piece of enrichment data D.sub.+ is implemented, for example to read the metadata of the enriched content and thus deliver the enrichment content C+ identified by the method of enrichment. This processing step 202 therefore enables the rebuilding/retrieval of the enrichment content C+ and the obtaining of associated parameters if any, such as for example a blurred rendering to be applied. As already discussed here above, the enrichment data D.sub.+ can for example correspond to an image number in a video and a spatial coordinate enabling the rebuilding of a rectangle in the format of the rendering zone (for example the landscape format) as well as a degree of blurred rendering.

(43) A step of rendering 203 of the source multimedia content CS and of the enrichment content C+ respectively in two distinct regions of the rendering zone is then implemented. For example, the enrichment content is rendered on each side of the source content, rendered in its portrait format, at the center of the rendering zone.
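Steps 201 to 203 can be sketched as a single layout computation: detect the ratio difference, then split the rendering zone into a centered region for the source content and two side bands for the enrichment content. The function name `layout_regions`, the tolerance and the `(left, top, right, bottom)` rectangle convention are assumptions made for this illustration.

```python
def layout_regions(zone_w, zone_h, src_w, src_h):
    """Split the rendering zone into a centered region for the source
    content (kept at its own width/height ratio) and two side regions
    for the enrichment content. Returns None when the ratios already
    match, i.e. when no enrichment rendering is needed (step 201)."""
    if abs(zone_w / zone_h - src_w / src_h) < 0.01:
        return None                    # same ratio: render the source alone
    scaled_w = round(zone_h * src_w / src_h)   # source scaled to zone height
    left = (zone_w - scaled_w) // 2            # center the source (step 203)
    return {
        "left_band":  (0, 0, left, zone_h),
        "source":     (left, 0, left + scaled_w, zone_h),
        "right_band": (left + scaled_w, 0, zone_w, zone_h),
    }
```

For a 1080x1920 portrait source in a 1920x1080 landscape zone, the source keeps its portrait ratio in the center while the two bands receive the enrichment content.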

(44) Examples of rendering of an enriched content are illustrated especially in FIGS. 5a to 5c, according to the three variants described here above for the method of enrichment, in which respectively: the enrichment content C+.sub.1 corresponds to a portion of the source content; the enrichment content C+.sub.2 corresponds to a geometrical background chosen from a library of images; the enrichment content C+.sub.3 corresponds to an image chosen from a library of images and a contextual text (chosen or not by the user at the origin of the source content).

(45) According to one particular feature of the method for rendering, if no difference of width/height ratio is detected between the rendering zone and the source content CS, i.e. when the source content is rendered in its snapshot format (in portrait mode for example), the disclosed technology provides for the display of an indicator of presence of an enrichment content, addressed to the user viewing the source content. In this way, this user can, if he prefers, change the orientation of his rendering device without being afraid that the rendering might deteriorate.

(46) To this end, the enrichment data D+ present in the enriched content CS.sub.enr is detected, during a step of detection, so as to make sure that an enrichment content C+ is truly available. Then, an indicator of presence I.sub.pres of enrichment content is displayed in addition to the source content CS on the screen of the rendering device as illustrated in FIG. 6.

(47) For example, one of the fields of metadata described in the table above (“EnrichVideoMode”), corresponding to a piece of enrichment data, can be detected in order to make sure of the presence of an enrichment content C+.

(48) The indicator of presence I.sub.pres of enrichment content can also be displayed provided at least one piece of enrichment data (any one of them) is present in the enriched content.

(49) The different embodiments and variants described here above therefore make it possible to provide a consistent and smooth display of a source content when it is rendered in a rendering zone (of a rendering device) having a width/height ratio different from that of the source content itself. This can be the case for example when a photo or a video is captured in portrait mode and rendered in landscape mode, or vice versa, and in any other rendering format different from the capture format.

(50) Referring now to FIG. 7, we present the hardware structure of an enrichment device 10 according to one embodiment of the disclosed technology.

(51) The term “enrichment device” can correspond equally well to a software component as to a hardware component or a set of hardware and software components, a software component itself corresponding to one or more computer programs or sub-programs or more generally to any element of a program capable of implementing a function or a set of functions.

(52) More generally, such an enrichment device comprises a live memory 13 (for example a random-access memory or RAM), a processing unit 12 equipped for example with a processor and driven by a computer program 11. At initialization, the code instructions of the computer program are for example loaded into the live memory 13 and then executed by the processor of the processing unit 12. The live memory 13 contains especially the criteria of identification of enrichment content, the rules for computing at least one spatial coordinate representative of the enrichment content, etc. The processor of the processing unit 12 implements the steps of the method of enrichment according to the instructions of the computer program 11 in order to: identify, from the source multimedia content, at least one enrichment content intended to be rendered in a region of the rendering zone distinct from a region for rendering the source multimedia content; enrich the source multimedia content in taking account of the at least one enrichment content, and deliver at least one enriched multimedia content comprising at least one piece of enrichment data representative of a rendering of the enrichment content in a region of the rendering zone distinct from a region for rendering the source multimedia content.

(53) FIG. 7 illustrates only one particular way, among several possible ways, of making the enrichment device so that it will perform the steps of the method described in detail here above with reference to FIG. 1 to FIG. 6 (in any one of the different embodiments or in a combination of these embodiments). Indeed, these steps can be carried out equally well on a reprogrammable computation machine (a PC computer, a DSP processor or a microcontroller) executing a program comprising a sequence of instructions or on a dedicated computing machine (for example a set of logic gates such as an FPGA or an ASIC or any other hardware module).

(54) Should the enrichment device be obtained with a reprogrammable computing machine, the corresponding program (i.e. the sequence of instructions) could be stored in a storage medium that is detachable (such as for example a floppy disk, a CD ROM or a DVD ROM) or non-detachable, this storage medium being partially or totally readable by a computer or a processor.

(55) Finally, referring to FIG. 8, we present the hardware structure of a rendering device 210 according to one embodiment of the disclosed technology.

(56) The term “rendering device” can correspond equally well to a software component as to a hardware component or a set of hardware and software components, a software component itself corresponding to one or more computer programs or sub-programs or more generally to any element of a program capable of implementing a function or a set of functions.

(57) More generally, such a rendering device comprises a live memory 23 (for example a random-access memory or RAM), a processing unit 22 equipped for example with a processor and driven by a computer program 21. At initialization, the code instructions of the computer program are for example loaded into the live memory 23 and then executed by the processor of the processing unit 22. The processor of the processing unit 22 implements the steps of the method for rendering according to the instructions of the computer program 21 in order to: detect a difference between a width/height ratio of the rendering zone and a width/height ratio of the source multimedia content; if a difference is detected: process the at least one piece of enrichment data and deliver the enrichment content; render the source multimedia content and the enrichment content respectively in two distinct regions of the rendering zone.
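The rendering steps carried out by the processing unit 22 can likewise be sketched in code. This is a minimal, assumed layout computation (fitting the source by height in a landscape zone and leaving side bands for the enrichment content); the names and the centring choice are illustrative, not from the patent:

```python
# Illustrative sketch of the rendering method: detect a width/height
# ratio difference between the rendering zone and the source, and if one
# is detected, render the source and the enrichment content in two
# distinct regions of the zone (regions given as (x, y, w, h) pixels).

def render(enriched, zone_w, zone_h):
    src = enriched["source"]
    if abs(src["width"] / src["height"] - zone_w / zone_h) < 1e-6:
        # No difference detected: the source fills the whole zone.
        return {"source_region": (0, 0, zone_w, zone_h),
                "enrichment_region": None}
    # Difference detected: fit the source by height, centred; the two
    # left-over side bands form the distinct region where the
    # enrichment content is rendered.
    scaled_w = int(zone_h * src["width"] / src["height"])
    x0 = (zone_w - scaled_w) // 2
    return {
        "source_region": (x0, 0, scaled_w, zone_h),
        "enrichment_region": [
            (0, 0, x0, zone_h),
            (x0 + scaled_w, 0, zone_w - x0 - scaled_w, zone_h),
        ],
    }

# A portrait 1080x1920 source in a 1920x1080 landscape zone.
layout = render({"source": {"width": 1080, "height": 1920}}, 1920, 1080)
```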

(58) FIG. 8 illustrates only one particular way, among several possible ways, of making the rendering device so that it will perform the steps of the method described in detail here above with reference to FIGS. 1 to 6 (in any one of the different embodiments or in a combination of these embodiments). Indeed, these steps can be carried out equally well on a reprogrammable computation machine (a PC computer, a DSP processor or a microcontroller) executing a program comprising a sequence of instructions or on a dedicated computing machine (for example a set of logic gates such as an FPGA, an ASIC or any other hardware module).

(59) Should the rendering device be obtained with a reprogrammable computing machine, the corresponding program (i.e. the sequence of instructions) could be stored in a storage medium that is detachable (such as for example a floppy disk, a CD ROM or a DVD ROM) or non-detachable, this storage medium being partially or totally readable by a computer or a processor.