Projector and method of projecting an image

11531254 · 2022-12-20

Abstract

The disclosed subject matter relates to a method of projecting an image by means of a light source emitting light pulses and an oscillating micro-electro-mechanical system (MEMS) mirror deflecting the emitted light pulses, comprising: providing a playout matrix of durations for each pixel, and incrementing or decrementing a pixel index whenever a respective duration indexed by the respective pixel indices in the playout matrix has lapsed; for each light pulse: retrieving the respective intensity and durations indexed by the current pixel indices, calculating an interval from at least one of said durations, emitting said light pulse with said retrieved intensity, and waiting said calculated interval before emitting the next light pulse. The disclosed subject matter further relates to a projector carrying out said method.

Claims

1. A method of projecting an image provided as a matrix of pixels with intensities onto a projection area by means of a light source emitting a train of light pulses with variable intensities and intervals and a micro-electro-mechanical system, MEMS, mirror oscillating about a horizontal axis with a horizontal oscillation period and about a vertical axis with a vertical oscillation period and deflecting the emitted light pulses, the method comprising: providing a playout matrix of a horizontal and a vertical duration for each pixel, and for a first half of every horizontal oscillation period, incrementing, and for a second half of every horizontal oscillation period, decrementing a horizontal pixel index whenever one of the horizontal durations indexed by the current horizontal pixel index in the playout matrix has lapsed; for a first half of every vertical oscillation period, incrementing, and for a second half of every vertical oscillation period, decrementing a vertical pixel index whenever one of the vertical durations indexed by the current vertical pixel index in the playout matrix has lapsed; for each light pulse in the train: retrieving the respective intensity from the pixel matrix indexed by the current horizontal and vertical pixel indices, retrieving the respective horizontal and vertical durations from the playout matrix indexed by the current horizontal and vertical pixel indices, calculating an interval from at least one of said respective horizontal and vertical durations, emitting said light pulse with said retrieved intensity, and waiting said calculated interval before emitting the next light pulse in the train.

2. The method according to claim 1, wherein the playout matrix is decomposed into a pair of floor values common to all elements of the playout matrix, a residual vector comprised of first residual values for horizontal and/or vertical durations common to all rows or columns of the playout matrix, and a residual matrix comprised of second residual values for horizontal and/or vertical durations, wherein each second residual value is provided with a shorter bit length than the floor or first residual values, and that said retrieving of the respective horizontal and vertical durations is made by combining the floor values with the respective first and second residual values indexed by the current horizontal and vertical pixel indices.

3. The method according to claim 2, wherein at least a part of the horizontal durations in the first and last columns of the playout matrix is provided, instead of in the playout matrix, in at least one horizontal offset vector which is used in calculating the intervals for the pixels in said first and last columns.

4. The method according to claim 2, wherein at least a part of the vertical durations in the first and last rows of the playout matrix is provided, instead of in the playout matrix, in at least one vertical offset vector which is used in calculating the intervals for the pixels in said first and last rows.

5. The method according to claim 2, wherein each first residual value is stored as an increment with respect to a neighbouring first residual value in the residual vector.

6. The method according to claim 2, wherein each second residual value is stored as an increment with respect to a neighbouring second residual value in the residual matrix.

7. The method according to claim 1, wherein the horizontal pixel index is incremented, or decremented, respectively, whenever the horizontal duration indexed by the current horizontal and vertical pixel indices in the playout matrix has lapsed, and that the vertical pixel index is incremented, or decremented, respectively, whenever the vertical duration indexed by the current horizontal and vertical pixel indices in the playout matrix has lapsed.

8. The method according to claim 1, wherein the playout matrix is stored in a lower pixel resolution than the pixel matrix and, when indexing a horizontal or vertical duration, is oversampled to the resolution of the pixel matrix.

9. The method according to claim 1, wherein the playout matrix is stored in a memory saving format by exploiting a symmetry of the playout matrix.

10. The method according to claim 1, wherein the image is provided as matrix of pixels with intensities for two or more colours, the light source comprises for each of said colours a laser emitting a train of light pulses of the respective colour with variable intensities and intervals and the MEMS mirror deflects the emitted light pulses of each of said colours, wherein the steps of providing, incrementing, retrieving, calculating, emitting and waiting are performed separately for each of said colours.

11. A projector, comprising a first memory for providing an image as a matrix of pixels with intensities, a light source configured to emit a train of light pulses with variable intensities and intervals, a micro-electro-mechanical system, MEMS, mirror configured to oscillate about a horizontal axis with a horizontal oscillation period and about a vertical axis with a vertical oscillation period and to deflect the emitted light pulses onto a projection area, a second memory containing a playout matrix of a horizontal and a vertical duration for each pixel, and a processor connected to the first memory, the light source, the MEMS mirror, and the second memory, wherein the processor is configured to for a first half of every horizontal oscillation period, increment, and for a second half of every horizontal oscillation period, decrement a horizontal pixel index whenever one of the horizontal durations indexed by the current horizontal pixel index in the playout matrix has lapsed; for a first half of every vertical oscillation period, increment, and for a second half of every vertical oscillation period, decrement a vertical pixel index whenever one of the vertical durations indexed by the current vertical pixel index in the playout matrix has lapsed, and for each light pulse in the train: retrieve the respective intensity from the pixel matrix indexed by the current horizontal and vertical pixel indices, retrieve the respective horizontal and vertical durations from the playout matrix indexed by the current horizontal and vertical pixel indices, calculate an interval from at least one of said respective horizontal and vertical durations, emit said light pulse via the light source with said retrieved intensity, and wait said calculated interval before emitting the next light pulse in the train.

12. The projector according to claim 11, wherein the playout matrix in the second memory is decomposed into a pair of floor values common to all elements of the playout matrix, a residual vector comprised of first residual values for horizontal and/or vertical durations common to all rows or columns of the playout matrix, and a residual matrix comprised of second residual values for horizontal and/or vertical durations, wherein each second residual value is provided with a shorter bit length than the floor or first residual values, and that the processor is configured to retrieve the respective horizontal and vertical durations by combining the floor values with the respective first and second residual values indexed by the current horizontal and vertical pixel indices.

13. The projector according to claim 12, wherein at least a part of the horizontal durations in the first and last columns of the playout matrix is stored in the second memory, instead of in the playout matrix, in at least one horizontal offset vector which is used in calculating the intervals for the pixels in said first and last columns, and/or that at least a part of the vertical durations in the first and last rows of the playout matrix is stored in the second memory, instead of in the playout matrix, in at least one vertical offset vector which is used in calculating the intervals for the pixels in said first and last rows.

14. The projector according to claim 11, wherein the playout matrix is stored in the second memory in a lower pixel resolution than the pixel matrix in the first memory and that the processor is configured to oversample the playout matrix to the resolution of the pixel matrix when indexing a horizontal or vertical duration.

15. The projector according to claim 11, wherein the image is providable in the first memory as a matrix of pixels with intensities for two or more colours, that the light source comprises for each of said colours a laser for emitting a respective train of light pulses of the respective colour with variable intensities and intervals, and that the MEMS mirror is configured to deflect the emitted light pulses of each of said colours, wherein the processor is configured to perform the steps of providing, incrementing, retrieving, calculating, emitting and waiting separately for each of said colours.

16. The projector according to claim 15, wherein the image is providable in the first memory as a matrix of pixels with intensities for the three colours red, green and blue.

Description

BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES

(1) The disclosed subject matter shall now be explained in more detail below on the basis of exemplary embodiments thereof with reference to the accompanying drawings, which show:

(2) FIG. 1 a projector according to the disclosed subject matter in the process of projecting an image onto a screen in a perspective view;

(3) FIG. 2 a plot of the intensity of emission of the projector of FIG. 1 as a function of time;

(4) FIG. 3 a plot of the angular position of the MEMS mirror of the projector of FIG. 1 about the horizontal axis as a function of time;

(5) FIG. 4 the projector of FIG. 1 in a schematic circuit diagram;

(6) FIG. 5a a decomposition of a playout matrix to be retrieved by the processor of the projector of FIGS. 1 and 4 in a three-dimensional plot (here only its horizontal durations are shown, its vertical durations being decomposed similarly);

(7) FIG. 5b a numerical example of the decomposition of the playout matrix of FIGS. 1 and 4, given for its horizontal durations;

(8) FIG. 6 a numerical example of an alternative decomposition of the playout matrix of FIGS. 1 and 4, given for its horizontal durations;

(9) FIG. 7 a numerical example of a further decomposition of the playout matrix of FIGS. 1 and 4, given for its horizontal durations;

(10) FIG. 8 the playout matrix of FIGS. 1 and 4 in a two-dimensional plot with greyscale coding of the horizontal durations;

(11) FIG. 9 a horizontal derivative of the playout matrix of FIG. 8 in a two-dimensional plot with greyscale coding of the derivatives of the horizontal durations; and

(12) FIG. 10 a vertical derivative of the playout matrix of FIG. 8 in a two-dimensional plot with greyscale coding of the derivatives of the vertical durations.

DETAILED DESCRIPTION

(13) FIG. 1 shows a projector 1 emitting a beam 2 of light pulses 3.sub.i (i=0, 1, 2, . . . ) to project an image 4 onto a wall 5. The image 4 may be a single image, e.g., a photo to be projected for a longer period of time, or be part of a movie M. Instead of a wall 5, the projector 1 could also emit the beam 2 of light pulses 3.sub.i onto any kind of surface, such as a board, projection screen, poster, the retina of an eye, an Augmented-Reality (AR) combiner waveguide, other combiner optics or the like.

(14) With reference to FIGS. 1 and 4, the projector 1 has a light source 6 for emitting the beam 2 and a micro-electro-mechanical system, MEMS, mirror 7 for deflecting the emitted beam 2 towards the wall 5. The MEMS mirror 7 oscillates about a horizontal axis 8 with a horizontal oscillation period T.sub.h and about a vertical axis 9 with a vertical oscillation period T.sub.v to scan the emitted beam 2 over a projection area 10 on the wall 5 along a trajectory 11.

(15) The light source 6 may be any light source known in the art, e.g., an incandescent lamp, a gas, liquid or solid laser, a laser diode, an LED, etc. The MEMS mirror 7 may either comprise one reflective surface 12 oscillating about the horizontal and vertical axes 8, 9 or two reflective surfaces 12, one after the other in the optical path of the light beam 2, each of which then oscillates about one of the horizontal and vertical axes 8, 9.

(16) In the embodiment shown in FIG. 1, the horizontal oscillation period T.sub.h is much shorter than the vertical oscillation period T.sub.v, and hence the projection area 10 is scanned by the light beam 2 in substantially horizontal meander lines (“line by line”) along the trajectory 11. Alternatively, T.sub.h may be much longer than T.sub.v to scan the projection area 10 with a trajectory 11 of substantially vertical meander lines. In general, T.sub.h and T.sub.v may be chosen arbitrarily, resulting in a scanning of the projection area 10 by a Lissajous curve.

(17) The projection area 10 is geometrically distorted due to the orientation of the projector 1 with respect to the wall 5, a possible curvature of the wall 5, and any intrinsic interdependencies of the oscillations of the MEMS mirror 7 about the horizontal and vertical axes 8, 9. Furthermore, in an uncompensated case (not shown), the non-linear oscillation movement of the MEMS mirror 7 leads to an unequal spatial distribution of periodically emitted light pulses 3.sub.i on the projection area 10. More light pulses 3.sub.i are emitted closer to a boundary 13 of the projection area 10 than at its centre 14. Besides a geometrical distortion, this causes an uneven brightness of the projection area 10.

(18) The projector 1 compensates for these distortions and uneven brightness by controlling the timing of the light pulses 3.sub.i so that they are projected in an image area 15 within the projection area 10 in an equidistant grid 16 of cells 17, one light pulse 3.sub.i per cell 17 and one cell 17 per pixel P.sub.x,y of the image 4, as best as possible.

(19) FIG. 2 shows a sequence or train S of light pulses 3.sub.i within the light beam 2 as light pulse intensities I over time t, and FIG. 3 shows the train S of light pulses 3.sub.i with respect to the horizontal angular position φ.sub.h of the MEMS mirror 7 over time t. Each light pulse 3.sub.i is emitted at a respective time t.sub.i with a pulse width pw and a respective intensity I.sub.i to project a pixel P.sub.x,y onto the corresponding cell 17 of the grid 16. The intensity I.sub.i of each pixel P.sub.x,y and hence light pulse 3.sub.i is provided in the image 4. The times t.sub.i, and in particular the time intervals Δt.sub.i between each two subsequent light pulses 3.sub.i, 3.sub.i+1, are calculated to yield a substantially equidistant projection of the pixels P.sub.x,y in the image area 15 as described later.

(20) When the trajectory 11 reaches a boundary 18 of the image area 15, a longer time interval Δt.sub.i+1 lapses between the light pulse 3.sub.i+1 and the light pulse 3.sub.i+2 respectively corresponding to pixel P.sub.4,1 and pixel P.sub.5,1 (FIG. 1). The time intervals Δt.sub.i could in principle be chosen such that the image area 15 is the largest rectangular region fitting in the projection area 10. In the variant shown in FIG. 3, only a quasi-linear regime t.sub.a of the oscillation periods T.sub.h and T.sub.v is used to project the light pulses 3.sub.i. Hence, the boundary 18 of the image area 15 and the boundary 13 of the projection area 10 are separated by a horizontal offset 19 and a vertical offset 20, each depending on the position in the projection area 10.

(21) The pulse width pw is equal for all pulses 3.sub.i in the train S of FIG. 2 to achieve a uniform brightness throughout the equidistant grid 16. The pulse width pw may be chosen as long as possible to maximise the brightness of the projected image; however, it must not exceed the minimal expected time interval Δt.sub.i between two subsequent pulses 3.sub.i, 3.sub.i+1. Alternatively, different pulse widths pw may be applied for different pulses 3.sub.i, e.g., to correct for an uneven absorption of the wall 5.

(22) As shown in FIG. 4, the projector 1 has a microprocessor MP which is connected to a memory 21 where the image 4 to be projected is stored as a matrix 22 of intensities I(x,y) for the pixels P.sub.x,y. The microprocessor MP determines in blocks 23 and 24 for each light pulse 3.sub.i in the train S a horizontal pixel index x and a vertical pixel index y corresponding to the current angular horizontal position φ.sub.h and angular vertical position φ.sub.v of the MEMS mirror 7 and hence the current position of the light beam 2 along the trajectory 11 over the projection area 10, as will be explained later in detail.

(23) On the one hand, the pixel indices x, y are used to retrieve the respective intensities I(x,y) of the pixels P.sub.x,y from the memory 21 and to apply them as intensities I.sub.i to the pulses 3.sub.i. The pulses 3.sub.i are generated on the basis of a system clock 25 by a pulse generator 26 in individual time intervals Δt.sub.i, modulated with their intensities I.sub.i in a modulator 27 and sent out via the light source 6 in the light beam 2 carrying the pulse train S.

(24) On the other hand, the pixel indices x, y determined by the microprocessor MP are used to retrieve respective horizontal and vertical pixel durations d.sub.h(x,y), d.sub.v(x,y) from a “playout” matrix 28 in a memory 29 and to calculate the time intervals Δt.sub.i for the pulse generator 26 therefrom in blocks 30, 31 and 32 as explained below.

(25) For performing these tasks, the microprocessor MP and in particular each of the blocks 23-27, 30-32 may be either implemented in software, e.g., as a function, an object, a class, etc., or in hardware, e.g., as an integrated circuit element, as an area in an ASIC, FPGA, etc., or as a mixture of hardware and software elements.

(26) The horizontal and vertical durations d.sub.h(x,y), d.sub.v(x,y) in the playout matrix 28 each represent a time span within the respective mirror oscillation period T.sub.h, T.sub.v in which time span the trajectory 11 would traverse a whole width w or height h, respectively, of a region 33 assigned to that pixel P.sub.x,y in the projection area 10.

(27) As shown in FIG. 1, for each inner, i.e., “non-boundary” pixel P.sub.x,y of the grid 16, the region 33 simply is the corresponding cell 17 of the grid 16. For each pixel P.sub.x,y at the boundary 18 of the grid 16, i.e., for pixels P.sub.x,y in the first or last rows r.sub.f, r.sub.l or columns c.sub.f, c.sub.l of the grid 16, the region 33 includes, in addition to the corresponding cell 17, also the adjacent vertical offset 20, or horizontal offset 19, respectively, of the image area 15 to the projection area 10, as exemplarily shown for the bottom left corner pixel P.sub.13,1. For an equidistant grid 16 the durations d.sub.h, d.sub.v for inner pixels P.sub.x,y thus represent the reciprocal values of the local horizontal and vertical angular velocities of the MEMS mirror 7, and for boundary pixels P.sub.x,y they additionally indicate a measure of the respective offset 19, 20.

(28) One practical possibility to fill the playout matrix 28 with appropriate values so that an equidistant grid with one light pulse 3.sub.i per grid cell 17 is obtained is to analytically determine the optical path of the light pulses 3.sub.i as a function of time t based on the mirror parameters and calculate the respective durations d.sub.h, d.sub.v for each pixel P.sub.x,y. Another possibility is to measure the trajectory 11 and distribution of periodically emitted light pulses 3.sub.i on the projection area 10, e.g., by means of a camera, and then calculate corresponding durations d.sub.h, d.sub.v therefrom for an equidistant, undistorted projection. Whenever the mirror parameters change significantly, e.g., due to aging or for different ambient temperatures, a different playout matrix 28 may be provided, either analytically, pre-calculated or calculated on the basis of new calibration measurements.
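
The first, analytical possibility mentioned above can be illustrated with a minimal numerical sketch (illustrative Python, not part of the disclosure; the function name, the normalised sinusoidal amplitude and the use of only a quasi-linear fraction of the sweep are assumptions made for the example):

```python
import math

def horizontal_durations(n_cols, T_h, linear_fraction=0.8):
    """Analytic horizontal durations d_h for an idealised sinusoidal
    mirror oscillation: the projected x position is assumed proportional
    to the mirror angle phi(t) = sin(2*pi*t/T_h) (amplitude normalised
    to 1), and only the central `linear_fraction` of the sweep (the
    quasi-linear regime t_a) is used for the image area."""
    a = linear_fraction                              # half-width of the used sweep
    # equally spaced pixel-column boundaries in screen coordinates
    bounds = [-a + 2 * a * k / n_cols for k in range(n_cols + 1)]
    # time at which the forward sweep crosses position x
    t = [T_h / (2 * math.pi) * math.asin(x) for x in bounds]
    # duration the beam spends inside each pixel column
    return [t[k + 1] - t[k] for k in range(n_cols)]

d_h = horizontal_durations(8, T_h=1.0)
# durations are longest near the turning points, shortest at the centre,
# reflecting the reciprocal of the local angular velocity of the mirror
assert d_h[0] > d_h[3] and d_h[-1] > d_h[4]
```

A real implementation would additionally account for the optical projection geometry and fill both the horizontal and vertical durations of the playout matrix 28.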

(29) The horizontal and vertical durations d.sub.h, d.sub.v may be contained in the playout matrix 28 as shown in FIG. 4, i.e., as two matrices each holding the respective horizontal or vertical durations d.sub.h, d.sub.v, or as a single matrix holding doublets, each of a horizontal and a vertical duration d.sub.h, d.sub.v.

(30) The pixel indices x, y corresponding to the current position of the light beam 2 on the projection area 10 are determined iteratively by blocks 23, 24 as follows. Basically, the indices x and y are determined independently of each other, i.e., index x in block 23 and index y in block 24.

(31) The pixel index determination processes in blocks 23, 24 are synchronised to the respective oscillations of the MEMS mirror 7 about the horizontal and vertical axes 8, 9. To this end, block 23 for determining the pixel index x periodically receives a synchronisation signal x_sync from a drive 34 of the MEMS mirror 7. The synchronisation signal x_sync resets the pixel index x to a predetermined starting point, e.g., to x=1 when the light beam 2 is at one of the left turning points 11.sub.1 of the trajectory 11 corresponding to the minima of the curve depicted in FIG. 3. Alternatively, the synchronisation signal x_sync could be output by the drive 34 at every zero crossing of the curve of FIG. 3 which corresponds to a pixel in the centre column of the grid 16, and the pixel index x would then be reset to the x index of the centre column.

(32) It should be noted that it is not necessary to have a synchronisation signal x_sync once or twice every horizontal oscillation period T.sub.h. A synchronisation signal x_sync could be received only every second, third, fourth, etc. oscillation period T.sub.h. As the pixel index x is incremented and decremented on the basis of accumulating the horizontal durations d.sub.h retrieved from the playout matrix 28 as explained below, the synchronisation signal x_sync is used just for resynchronising the x pixel determination process from time to time to counteract the accumulation of possible errors or inaccuracies in the durations d.sub.h.

(33) Similarly, block 24 for determining the pixel index y receives a synchronization signal y_sync from the drive 34 of the MEMS mirror 7 indicative of a predetermined time within each vertical oscillation period T.sub.v to reset the pixel index y to a predetermined starting point, e.g. y=1, for resynchronizing the y pixel determination to the vertical mirror oscillation from time to time.

(34) In each iteration in blocks 23, 24 first the durations d.sub.v(x,y) and d.sub.h(x,y) stored for the current pixel P.sub.x,y indexed by the current pixel indices x, y are retrieved from the playout matrix 28. Since the playout matrix 28 is stored in the memory 29 in a very specific way as will be explained further down in detail, during said retrieving blocks 30, 31 reconstruct the playout matrix 28 from the memory 29 and output the respective durations d.sub.h, d.sub.v indexed by the current pixel indices x, y to blocks 23 and 24 via paths 35, respectively.

(35) Then, in block 23 (the analogous step in block 24 is described later), when such a new horizontal duration d.sub.h(x,y) is received, a timer monitors the lapse of the horizontal duration d.sub.h(x,y), e.g., by counting the system time t received from the system clock 25. When the horizontal duration d.sub.h(x,y) has lapsed, the horizontal pixel index x is incremented and the next iteration begins, i.e., block 23 starts anew with retrieving the next horizontal duration d.sub.h(x,y).

(36) The pixel index x is thus incremented iteratively in block 23 until either it has reached the maximum pixel index of the grid 16 or half of the oscillation period T.sub.h has lapsed. In the following iterations, the pixel index x is then decremented until either it has returned to its starting value x=1 or the other half of the oscillation period T.sub.h has lapsed. Hence, in both cases the pixel index x is incremented for the first half and decremented for the second half of every horizontal oscillation period T.sub.h.

(37) In block 24 analogous iteration steps are performed for the pixel index y. The pixel index y is incremented for the first half and decremented for the second half of the vertical oscillation period T.sub.v whenever the current vertical duration d.sub.v(x,y) retrieved from the playout matrix 28 lapses.

(38) While the blocks 23 and 24 determine the respective current pixel indices x, y, each time one of the pixel indices x, y changes a “new” pixel P.sub.x,y indexed by the newly changed pixel indices x, y is “played out”, i.e., a new light pulse 3.sub.i+1 is sent following the current light pulse 3.sub.i. The time interval Δt.sub.i which is to be waited before the new light pulse 3.sub.i+1 in the train S is sent is calculated in block 32 as a function f(d.sub.h,d.sub.v) of the horizontal and vertical durations d.sub.h, d.sub.v, more precisely, of those durations d.sub.h, d.sub.v that have just been retrieved under the current pixel indices x, y from the playout matrix 28, see paths 36. Therefore, when sending out the train S of light pulses 3.sub.i the intensity I.sub.i(x,y) to be applied to a light pulse 3.sub.i by the modulator 27 for a pixel P.sub.x,y is retrieved from the matrix 22 in the memory 21 under the current pixel indices x, y, and the waiting interval Δt.sub.i for sending a respective subsequent light pulse 3.sub.i+1 is calculated in block 32 as a function of the horizontal and vertical durations d.sub.h(x,y), d.sub.v(x,y) retrieved from the playout matrix 28 in the memory 29.
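
The iterative playout described in the preceding paragraphs can be sketched for a single forward (incrementing) sweep over one row as follows (illustrative Python; the restriction to one row, the function name and the simple interval rule Δt = d.sub.h − pw are simplifying assumptions of the sketch, not the disclosed implementation):

```python
def playout_row(d_h_row, pw):
    """Sketch of block 23 during the incrementing half of T_h: the
    horizontal pixel index x advances whenever the duration d_h(x)
    indexed by the current index has lapsed, and on every index change
    a pulse for the new pixel is played out. The decrementing half and
    the vertical index y (block 24) are omitted for brevity."""
    events = []                     # (emission time, pixel index) per pulse
    t, x = 0.0, 0
    while x < len(d_h_row):
        d = d_h_row[x]              # retrieve d_h under the current index
        events.append((t, x))       # emit the pulse for pixel x ...
        t += d                      # ... wait dt = d - pw, then the pulse width pw
        x += 1                      # d_h(x) has lapsed: increment the index
    return events

# slower near the turning points (large d_h), faster at the centre
ev = playout_row([4.0, 2.0, 1.0, 1.0, 2.0, 4.0], pw=0.5)
assert [x for _, x in ev] == [0, 1, 2, 3, 4, 5]
assert ev[1][0] - ev[0][0] == 4.0   # pulse spacing equals the pixel's duration
```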

(39) The block 32 may calculate the time interval Δt.sub.i in many ways. In the embodiment of FIG. 1 with T.sub.v>>T.sub.h, the interval Δt.sub.i between two light pulses 3.sub.i, 3.sub.i+1 may be calculated by taking only the horizontal duration d.sub.h (minus the pulse width pw), i.e., Δt.sub.i(x,y)=d.sub.h(x,y)−pw.

(40) In the reversed case of T.sub.h>>T.sub.v, the interval Δt.sub.i may analogously be calculated by considering only the vertical duration d.sub.v (minus the pulse width pw), i.e., Δt.sub.i(x,y)=d.sub.v(x,y)−pw. In an intermediate case any combination of the horizontal and vertical durations d.sub.h, d.sub.v may be taken, e.g., Δt.sub.i=min(d.sub.h, d.sub.v)−pw.
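
The variants of the function f(d.sub.h, d.sub.v) just described may be collected in one small helper (illustrative Python; the mode names are invented for the example):

```python
def interval(d_h, d_v, pw, mode="line_by_line"):
    """Possible choices of block 32's function f(d_h, d_v) for the
    waiting interval dt between two subsequent light pulses."""
    if mode == "line_by_line":       # T_v >> T_h: horizontal duration governs
        return d_h - pw
    if mode == "column_by_column":   # T_h >> T_v: vertical duration governs
        return d_v - pw
    return min(d_h, d_v) - pw        # intermediate / general Lissajous case

assert interval(5.0, 3.0, 1.0, "line_by_line") == 4.0
assert interval(5.0, 3.0, 1.0, "column_by_column") == 2.0
assert interval(5.0, 3.0, 1.0, "lissajous") == 2.0
```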

(41) For all the above processes, a fast retrieval of the durations d.sub.h, d.sub.v from the playout matrix 28 is fundamental. To this end, a fast physical memory is used for the memory 29, such as an on-board or internal memory of the processor MP. However, such on-board or internal processor memory is usually limited in size, which would put a limit on the maximal resolution of the image 4 to project.

(42) FIGS. 5a and 5b illustrate a memory saving way of storing the playout matrix 28 in the second memory 29 so that a fast physical memory can be used. FIG. 5a shows only the horizontal durations d.sub.h(x,y) stored in the playout matrix 28 and their values depending on the pixel indices x, y and FIG. 5b a numerical exemplary decomposition of the horizontal durations d.sub.h(x,y). The same applies mutatis mutandis to the vertical durations d.sub.v(x,y) stored in the playout matrix 28, such that the following decomposition and variants thereof can be performed therefor analogously.

(43) The horizontal durations d.sub.h can be decomposed into a floor value F.sub.h common to all elements of the playout matrix 28, a residual vector V.sub.h(y) comprised of a “first” residual value 37 for each row r.sub.i, i.e., common to all columns c.sub.i of the playout matrix 28, and a residual matrix M.sub.h(x,y) comprised of “second” residual values 38.

(44) Analogously, the vertical durations d.sub.v can be decomposed into a floor value F.sub.v common to all elements of the playout matrix 28, a residual vector V.sub.v(y) comprised of a “first” residual value 37 for each row r.sub.i, i.e., common to all columns c.sub.i of the playout matrix 28, and a residual matrix M.sub.v(x,y) comprised of “second” residual values 38. For ease of understanding, the decomposition of the playout matrix 28 is described with reference to its horizontal floor value F.sub.h, residual vector V.sub.h and matrix M.sub.h in the following. However, it goes without saying that the same applies mutatis mutandis to its vertical floor value F.sub.v, residual vector V.sub.v and matrix M.sub.v.

(45) The second residual values 38 represent the difference between the respective sum of floor and first residual value F.sub.h, 37 and the respective duration d.sub.h(x,y). This difference has a smaller magnitude than the duration d.sub.h(x,y), and each second residual value 38 can thus be stored in the second memory 29 with a shorter bit length, e.g., with four bits when the floor and first residual values F.sub.h, 37 have a bit length of eight.

(46) Coming back to FIG. 4, the horizontal durations d.sub.h are retrieved from the second memory 29 by means of the retrieving block 30. The retrieving block 30 reconstructs each horizontal duration d.sub.h by combining, in this case adding, the respective floor, first and second residual values F.sub.h, 37, 38 from the residual vector V.sub.h(y) and residual matrix M.sub.h(x,y) indexed by the current horizontal and vertical pixel indices x, y. Analogously, the retrieving block 31 retrieves the vertical durations d.sub.v by combining the floor and the respective first and second residual values F.sub.v, 37 and 38 from the vertical residual vector V.sub.v(y) and residual matrix M.sub.v(x,y).
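
The decomposition and its reconstruction by the retrieving blocks can be sketched numerically as follows (illustrative Python; choosing the floor as the global minimum and the first residuals as per-row minima is one possible scheme for the example, not the one fixed by the disclosure):

```python
def decompose(d):
    """Decompose (the horizontal part of) a playout matrix into a floor
    value F common to all elements, a per-row residual vector V(y) and a
    residual matrix M(x,y) of small second residuals, as in FIGS. 5a/5b."""
    F = min(min(row) for row in d)                 # floor value
    V = [min(v - F for v in row) for row in d]     # first residuals, one per row
    M = [[v - F - V[y] for v in row]               # second residuals
         for y, row in enumerate(d)]
    return F, V, M

def reconstruct(F, V, M, x, y):
    # blocks 30/31: combine floor, first and second residual value
    return F + V[y] + M[y][x]

d = [[100, 102, 101],
     [110, 113, 111],
     [120, 121, 124]]
F, V, M = decompose(d)
assert all(reconstruct(F, V, M, x, y) == d[y][x]
           for y in range(3) for x in range(3))
assert max(max(row) for row in M) < 16   # second residuals fit in 4 bits here
```

The memory saving arises because only the small second residuals are stored per pixel, while the larger floor and first residual values are stored once per matrix or per row, respectively.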

(47) Depending on the applied decomposition, the reconstruction may involve another combination, e.g., an appropriate non-linear decomposition (not shown) could be reconstructed by multiplying the floor F.sub.h and the respective first residual value 37 and adding/multiplying the second residual value 38, etc.

(48) The dynamics of the playout matrix 28 determine the memory savings achieved by the decomposition and should hence be taken into account when choosing the appropriate decomposition. In the example shown in FIG. 5a the durations d.sub.h(x,y) exhibit large horizontal dynamics with mostly similar values in x-direction, i.e., within each row r.sub.i of the playout matrix 28. FIG. 6 shows an alternative decomposition of the playout matrix 28 when the durations d.sub.h(x,y) exhibit large vertical dynamics with mostly similar values in y-direction. Here, the first residual vector V.sub.h(x) may be used to contain one residual value 37 for each column c.sub.i, common to all rows r.sub.i of the playout matrix 28.

(49) In some applications, only a horizontal calibration of the image 4 to project may be sufficient. Hence, the horizontal durations d.sub.h may be fully decomposed into floor, first and second residual values F.sub.h, 37, 38 while the vertical durations d.sub.v are decomposed into a constant matrix, e.g., stored only as a floor value F.sub.v. Alternatively, a full horizontal and a partial vertical calibration can be performed by decomposing the horizontal durations d.sub.h into floor, first and second residual values F.sub.h, 37, 38 while the vertical durations d.sub.v are decomposed into a matrix with constant rows or columns, e.g., being stored only as a floor value F.sub.v and a residual vector V.sub.v. Of course, in all these examples the terms “horizontal” and “vertical” may be interchanged.

(50) Because of the horizontal and/or vertical offsets 19, 20, the durations d.sub.h in the first and last columns c.sub.f, c.sub.l and/or rows r.sub.f, r.sub.l of the playout matrix 28 are significantly larger than in the rest of the playout matrix 28, which increases the dynamics of the playout matrix 28. To mitigate this problem, in a further variant shown in FIG. 7 a part of each of the horizontal durations d.sub.h in the first and last columns c.sub.f, c.sub.l of the playout matrix 28 may be stored, instead of in the playout matrix 28, in a horizontal offset vector O.sub.h. The respective elements of this offset vector O.sub.h are then also retrieved by the block 30 to reconstruct the durations d.sub.h.

(51) Alternatively, instead of one horizontal offset vector O.sub.h two horizontal offset vectors O.sub.h,f, O.sub.h,l can be used, one to store a part of the horizontal durations d.sub.h of the first column c.sub.f and the other to store a part of the horizontal durations d.sub.h of the last column c.sub.l.

(52) Analogously, vertical durations d.sub.v in the first and last rows r.sub.f, r.sub.l of the playout matrix 28 can be stored in a vertical offset vector O.sub.v and retrieved therefrom by block 31. Also here two vertical offset vectors O.sub.v,f, O.sub.v,l can be used, one to store a part of the vertical durations d.sub.v of the first row r.sub.f and the other to store a part of the vertical durations d.sub.v of the last row r.sub.l.
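Retrieval with separate offset vectors for the first and last columns might then look as follows; the column count and all vector contents are hypothetical example data:

```python
NX = 4                      # number of columns in the playout matrix (hypothetical)
F_h, V_h = 1000, [3, 5]     # floor and first residual values (hypothetical)
M_h = [[0, 1, 2, 0],        # second residual values, indexed by [y][x]
       [1, 0, 0, 2]]
O_h_f = [200, 210]          # per-row offsets for the first column c_f
O_h_l = [198, 207]          # per-row offsets for the last column c_l

def retrieve_d_h(x: int, y: int) -> int:
    """Reconstruct d_h(x, y), adding back the part moved to the offset vectors."""
    d = F_h + V_h[y] + M_h[y][x]
    if x == 0:              # first column: fold in the offset O_h,f
        d += O_h_f[y]
    elif x == NX - 1:       # last column: fold in the offset O_h,l
        d += O_h_l[y]
    return d
```

Moving the large first- and last-column contributions out of the matrix keeps the remaining residuals small, which is exactly the dynamics reduction the text aims at.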

(53) A further memory reduction can be achieved by using an incremental storage scheme for the first and second residual values 37, 38 and/or the offset vectors O.sub.h, O.sub.v. As can be seen from FIGS. 8-10, the playout matrix 28 (here shown for the horizontal durations d.sub.h) has, due to a discrete resolution of storage, regions 39 of similar values which change in steps at region borders 40.

(54) This property of the playout matrix 28 can be exploited by storing the respective elements incrementally. The left side of FIG. 7 shows a variant of storing the offset vector O.sub.h in the form of a vector ΔO.sub.h whose elements 41 are each stored as an increment with respect to a neighbouring element, one element 42 being stored as an absolute value. Only this element 42 needs to be stored with a longer bit length than the other elements 41. Similarly, the right side of FIG. 7 shows the residual matrix M.sub.h with every residual value 38 being stored as an increment with respect to its neighbour (except for one absolute value). Optionally, also the first residual values 37 in the vectors V.sub.h, V.sub.v may be stored incrementally.
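The incremental scheme amounts to delta coding along a row or vector; a minimal sketch with hypothetical durations:

```python
def encode_incremental(values):
    """Store the first element as an absolute value and every further
    element as an increment with respect to its left neighbour."""
    return [values[0]] + [values[i] - values[i - 1] for i in range(1, len(values))]

def decode_incremental(encoded):
    """Rebuild the absolute values by accumulating the increments."""
    out = [encoded[0]]
    for inc in encoded[1:]:
        out.append(out[-1] + inc)
    return out

print(encode_incremental([1200, 1200, 1201, 1201, 1199]))  # [1200, 0, 1, 0, -2]
```

Inside a region 39 the increments are zero and at a region border 40 they are small, so all elements except the one absolute value fit into far fewer bits.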

(55) When an incremental storage scheme is used, instead of storing the sign of each increment, only the positions at which the sign of the increments changes can be stored. The magnitudes of the increments can then be combined with the separately retrieved signs. In the example of FIGS. 8-10, each increment can then be stored using one single bit.
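Separating magnitudes from sign-change positions could be sketched like this; zero increments are treated as positive here, a simplifying assumption not taken from the text:

```python
def split_signs(increments):
    """Separate magnitudes from signs, keeping only the first sign and the
    positions where the sign flips. Zeros are treated as positive (assumption)."""
    signs = [1 if v >= 0 else -1 for v in increments]
    magnitudes = [abs(v) for v in increments]
    flips = [i for i in range(1, len(signs)) if signs[i] != signs[i - 1]]
    return magnitudes, signs[0], flips

def merge_signs(magnitudes, first_sign, flips):
    """Recombine the magnitudes with signs reconstructed from the flip positions."""
    out, sign = [], first_sign
    for i, m in enumerate(magnitudes):
        if i in flips:
            sign = -sign
        out.append(sign * m)
    return out
```

When the magnitudes are all 0 or 1, as in the regions of FIGS. 8-10, each magnitude indeed fits into a single bit, with the short flip-position list stored alongside.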

(56) Up to now, the x- and y-dimensions of the pixel matrix 22 and the playout matrix 28 were equal. In an alternative embodiment the playout matrix 28 can be stored in the second memory 29 in a lower pixel resolution than the pixel matrix 22 and be oversampled to the resolution of the pixel matrix 22 when indexing a horizontal or vertical duration d.sub.h, d.sub.v. The lower resolution may, e.g., be obtained by averaging neighbouring durations d.sub.h, d.sub.v or exploiting a symmetry of the playout matrix 28.
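Indexing a lower-resolution playout matrix from full-resolution pixel indices can be as simple as nearest-neighbour oversampling; a sketch with hypothetical factors and data (the text also mentions deriving the low resolution by averaging neighbouring durations):

```python
LO_PLAYOUT = [[1000, 1004],      # low-resolution playout matrix (hypothetical)
              [1002, 1006]]
FX, FY = 2, 2                    # oversampling factors in x and y (hypothetical)

def d_h_at(x: int, y: int) -> int:
    """Index the low-resolution playout matrix from full-resolution pixel
    indices by nearest-neighbour oversampling (integer division)."""
    return LO_PLAYOUT[y // FY][x // FX]
```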

(57) As an example, consider a centred projection on a planar wall 5, where the playout matrix 28 has a horizontal and a vertical symmetry and may be stored using only a quarter of the memory compared to an asymmetric case. When present, an offset vector O.sub.h, O.sub.v may also be stored in a memory saving format by exploiting a symmetry.
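Exploiting the symmetry amounts to storing one quadrant and mirroring the indices on retrieval; a sketch of one possible index mapping:

```python
def quadrant_index(x: int, y: int, nx: int, ny: int) -> tuple[int, int]:
    """Map full-matrix indices (x, y) of an nx-by-ny playout matrix with
    horizontal and vertical symmetry into the stored quadrant (sketch)."""
    qx = x if x < (nx + 1) // 2 else nx - 1 - x   # mirror the right half
    qy = y if y < (ny + 1) // 2 else ny - 1 - y   # mirror the lower half
    return qx, qy
```

The same mirroring applied to one index only yields the memory-saving format for a symmetric offset vector O_h or O_v.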

(58) So far, the image 4 was represented by one single intensity I.sub.i per pixel P.sub.x,y allowing only for a monochromatic, greyscale or black and white projection. In many applications a colour image 4 represented as a matrix 22 of pixels P.sub.x,y with individual intensities I.sub.i for each of two or more colours, e.g., RGB or YCbCr, etc., is to be projected. In this case, the light source 6 comprises, for each of the colours, a laser emitting a respective train S of monochromatic light pulses 3.sub.i with variable intensities I.sub.i and intervals Δt.sub.i, and the MEMS mirror 7 deflects all of the emitted light pulses 3.sub.i. The retrieval of the durations d.sub.h, d.sub.v and intensities I.sub.i and the calculation of the intervals Δt.sub.i is then performed by providing the blocks 23, 24, 30, 31, 32, the pulse generator 26 and the modulator 27 for each colour. The image area 15 may, e.g., be chosen as the largest rectangular area of overlap of the respective projection areas 10 of all colours with overlapping grids 16.

(59) The disclosed subject matter is not restricted to the specific embodiments described above but encompasses all variants, modifications and combinations thereof that fall within the scope of the appended claims.