Abstract
A method and a device for time synchronisation of the optical transmission of data in free space from a transmitter to at least one receiver are discussed. The method and the device improve the time synchronisation between the image reproduction of a transmitter and the image recording of a camera for the optical free-space transmission between transmitter and camera. This is achieved in that an arbitrary recording phase and an arbitrary recording start time are allowed for the image recording by the camera, an arbitrary implementation of the updating of the image content, typically line by line with a time offset, is allowed for the imaging device, an arbitrary implementation of the readout timing, and thus of the resulting shutter timings, of the receiving camera is allowed, and a synchronisation is performed subsequently on the basis of the image recordings, which were not recorded in synchronised fashion.
Claims
1. A method for time synchronisation of optical transmission of data in free space from a transmitter to at least one receiver, wherein the transmitter is an imaging device with a plurality of pixels, which are controlled by image data (s(i, j, m)) of an image sequence (s.sub.n(i, j)) that represent images, and the data provided for transmission are divided into a data sequence (d.sub.n(i, j)) of data packets each comprising a number of data bits, and each data bit is associated with at least one of the pixels, and wherein the data packets are each modulated onto the image data (s(i, j, m)) of at least one image of the image sequence (s.sub.n(i, j)) in pixel-based fashion in the form of an amplitude modulation, wherein the modulation of the image data (s(i, j, m)) is performed with a time difference, the image sequence (s.sub.n(i, j)) is formed from image pairs of successive images, and wherein the data sequence (d.sub.n(i, j)) is formed from pixel-based data packets (d(i, j, m)), of which each second data packet to be superposed (d(i, j, 2m+1)) corresponds to the preceding first pixel-based data packet to be superposed (d(i, j, 2m)) with reversed sign, and wherein each first pixel-based data packet to be superposed (d(i, j, 2m)) is superposed with each first image of an image pair and each second pixel-based data packet to be superposed (d(i, j, 2m+1)) is superposed with each second image of the image pair, and a difference image is generated in the receiver for recovery of the data sequence (d.sub.n(i, j)) from each image pair by subtraction of the second image of an image pair from the first image of the same image pair pixel by pixel, wherein the receiver comprises a camera capable of recording video, which is to be arranged at a distance from the imaging device, wherein the camera receives the image sequence (s.sub.n(i, j)) sent by the imaging device, and wherein the method includes making an unsynchronized image recording of the image sequence by the camera at a random recording start time, and performing a synchronisation after the image recording on the basis of the unsynchronized image recording.
2. The method according to claim 1, wherein the image sequence (s.sub.n(i, j)) is an image sequence of successive image pairs of identical images each with identical image data (s(i, j, m)).
3. The method according to claim 1, wherein the camera capable of recording video, for an image recording of the camera, has an image recording rate f.sub.C and an exposure time t.sub.exp, the imaging device, for an image reproduction of the imaging device, has an image reproduction rate f.sub.D and a mean reaction time t.sub.R, and the exposure time t.sub.exp and the image recording rate f.sub.C are set in accordance with the following mathematical relation:
t.sub.exp≤1/f.sub.D−1/f.sub.C−t.sub.R.
4. The method according to claim 3, wherein all image recordings of the camera are divided in the vertical image direction into n.sub.v image sections of equal size, wherein the time synchronisation between the image reproduction of the imaging device and the image recording of the camera is performed separately for each vertical image section.
5. The method according to claim 4, wherein a time synchronisation between the image reproduction of the imaging device and the image recording of the camera is attained in that, in the receiver, after the image recording, a reconstruction of the transmitter-side image sequence (s.sub.n(i, j)) is performed on the basis of the image recordings by performing a separate allocation of the image sections within the image recordings of the camera to the individual images of the transmitter-side image sequence (s.sub.n(i, j)).
6. The method according to claim 5, wherein the section-by-section reconstruction of the images of the image sequence (s.sub.n(i, j)) displayed by the imaging device is performed on the basis of the image recordings in the receiver in that the image pairs of the transmitter-side image sequence (s.sub.n(i, j)) within the image recordings are searched image section by image section in that, in each vertical image section, all difference image sections from all pairs of image sections potentially relevant for the generation of a difference image section are produced from the chronologically successive image recordings and compared with one another, wherein the difference image section which, in relation to a mean signal level, has a strongest data signal created by the modulation of the pixel-based data packet (d(i, j, m)), is used within the particular vertical image section for the subsequent generation of a difference image.
7. The method according to claim 6, wherein a comparison criterion for the comparison of the potential difference image sections produced section by section is defined, wherein for this purpose the sum of the amounts of all pixel values B.sub.sum(k.sub.X,k.sub.Y,n) of the particular difference image section n, which was produced from the image recordings k.sub.X and k.sub.Y by pixel-based subtraction, is used in accordance with the following formula:
B.sub.sum(k.sub.X,k.sub.Y,n)=Σ.sub.p=1.sup.W Σ.sub.q=(n−1)·H/n.sub.v+1.sup.n·H/n.sub.v|r.sub.modCh(p,q,I.sub.X)−r.sub.modCh(p,q,I.sub.Y)|
Here:
k.sub.X, k.sub.Y: image recording indices no. X, Y
n: running index of the vertical image sections, n=1 . . . n.sub.v
p, q: image sensor pixel coordinates of the image recordings
W, H: image width W and image height H of the image recordings in image sensor pixel coordinates
modCh: modulated colour channel of the image recordings, wherein a modulated colour channel is a colour channel within a colour space which was used on the transmitter side for the pixel-based modulation of the data sequence
r.sub.modCh(p, q, I.sub.X): intensity value, i.e. the height of the amplitude of the digital representation of an image, of the colour channel modCh at the image sensor coordinates (p, q) in image recording X.
8. The method according to claim 7, wherein an extended comparison criterion is defined, in which B.sub.sum(k.sub.X,k.sub.Y,n) is calculated in each case for a modulated colour channel and for a colour channel not used for the data modulation, that is to say a non-modulated colour channel, wherein a modulated colour channel is a colour channel within a colour space which was used on the transmitter side for the pixel-based modulation of the data sequence, and wherein the extended comparison criterion is then calculated as follows:
9. The method according to claim 1, wherein the same pixel-based data packet (d(i, j, 2m)) is superposed with each image pair.
10. The method according to claim 9, wherein a group of bs×bs pixels of the imaging device are combined to form a data block which is modulated by the same data bit of the data sequence d.sub.n(i, j), wherein a transmitter-side local filtering of each pixel-based data packet (d(i, j, m)) is additionally performed with the following filter matrix:
11. The method according to claim 10, wherein from the amount of provided image recordings of the camera, the difference image that is the same in respect of the superposed pixel-based data packet (d(i, j, m)) is produced more than once in the receiver, and the resultant difference images that are the same in respect of the pixel-based data packet (d(i, j, m)) are summed pixel by pixel, thus resulting in an improved signal-to-noise ratio as compared to an individual difference image.
12. The method according to claim 11, wherein each image recording of the camera or each of the produced individual difference images in the receiver is subjected before the further processing to a local filtering with the same filter matrix with which each pixel-based data packet (d(i, j, m)) was filtered locally on the transmitter side.
13. The method according to claim 12, wherein the produced and locally filtered difference images are summed inclusive of sign, wherein, from the second produced difference image, the new difference image to be summed is additively superposed by way of trial with the first difference image or the previous difference images already summed at this point, once with positive and once with negative pixel-based sign, and then the sign with which the data signal contained in the summed difference image and created by the transmitter-side modulation of the pixel-based data packet (d(i, j, m)) is strongest is determined in a comparison of the two difference images summed by way of trial.
14. The method according to claim 13, wherein for the comparison of the strength of the data signal contained in the difference images summed with positive and negative sign by way of trial, the sums of the amounts of the intensity values, wherein what is meant here is the height of the amplitude of the digital representation of the summed difference image, of all pixels of the resultant difference images are compared with one another.
15. The method according to claim 14, wherein each image recording of the camera or each of the produced difference images in the receiver is divided into n.sub.v vertical image sections of equal size, wherein the summing, inclusive of sign, of the produced difference images is performed separately for each vertical image section.
16. The method according to claim 15, wherein following the section-based summing, inclusive of sign, of the difference images, the possibly different signs of the individual image sections, if necessary, are matched to one another by comparing the boundaries between the image sections that are potentially to be matched in respect of the sign, wherein a high negative correlation indicates a sign difference and a high positive correlation indicates a sign uniformity between two adjacent sections.
17. The method according to claim 16, wherein each three chronologically successive images of the imaging device are combined to form an image group and are modulated, wherein in each case the first image is modulated with positive sign, the second image is not modulated, and the third image is modulated with negative sign.
18. The method according to claim 14, wherein each of the produced difference images in the receiver is divided into n.sub.v vertical image sections of equal size and into n.sub.h horizontal image sections of equal size, whereby a two-dimensional arrangement of n.sub.v×n.sub.h image sections is created, wherein the summing, inclusive of sign, is performed separately for each image section.
19. A device for carrying out the method according to claim 1.
Description
DRAWINGS
(1) The invention will be explained in greater detail hereinafter on the basis of preferred embodiments with reference to the accompanying drawing. The shown features may represent an aspect of the invention individually or in combination. Features of various exemplary embodiments are transferable from one exemplary embodiment to another.
(2) In the drawing
(3) FIG. 1 shows an arrangement of transmitter and receiver for optical transmission of data in free space,
(4) FIG. 2 shows a temporal display sequence of the modulation of pixel-based data packets 1, 2 and 3 with use of the time differential modulation with image content repetition in accordance with a first exemplary embodiment of the invention,
(5) FIG. 3 shows a temporal display sequence of the modulation of pixel-based data packets with use of the time differential modulation without image content repetition in accordance with a second exemplary embodiment of the invention,
(6) FIG. 4 shows the space-time modelling of the time-space reproduction behaviour of typical devices, wherein FIG. 4a) relates to LCD displays, and FIG. 4b) shows the space-time modelling of the time-space reproduction behaviour of typical rolling-shutter CMOS cameras,
(7) FIG. 5 shows the space-time modelling of an image reproduction and an image recording with identical image reproduction and image recording rate f.sub.C=f.sub.D for three different random phase positions of the image recording, wherein the scanning direction of the line-by-line scanning behaviour of display and camera in FIGS. 5a) and b) is the same and is opposite in FIG. 5c).
(8) FIG. 6 shows the space-time modelling of an image reproduction in accordance with a first exemplary embodiment of the invention and an image recording for f.sub.C>f.sub.D,
(9) FIG. 7 shows the space-time modelling of an image reproduction in accordance with a first exemplary embodiment of the invention and an image recording for f.sub.C>f.sub.D with opposite scanning behaviour of display and camera,
(10) FIG. 8 shows the space-time modelling of an image reproduction in accordance with a first exemplary embodiment of the invention and an image recording for f.sub.C=f.sub.D once with identical scanning direction and once with opposite scanning direction of the line-by-line scanning behaviour of display and camera,
(11) FIG. 9 shows the division of a full HD image recording into n.sub.v=10 vertical spatial image sections of equal size,
(12) FIG. 10 shows the two-dimensional division of a full HD image recording into n.sub.v=5 by n.sub.h=8 spatial image sections of equal size,
(13) FIG. 11 shows the space-time modelling of an image reproduction in accordance with a first exemplary embodiment of the invention and an image recording for f.sub.C>f.sub.D with an exemplary division of the image recordings into n.sub.v=5 vertical spatial image sections,
(14) FIG. 12 shows the space-time modelling of an image reproduction in accordance with a second exemplary embodiment of the invention and an image recording for f.sub.C=f.sub.D,
(15) FIG. 13 shows a signal flow diagram of the receiver according to a second exemplary embodiment of the invention,
(16) FIG. 14 shows the space-time modelling of an image reproduction in accordance with a second exemplary embodiment of the invention and an image recording for f.sub.C>f.sub.D with an exemplary division of the image recordings into n.sub.v=5 vertical spatial image sections,
(17) FIG. 15 shows an alternative temporal display sequence of the modulation of a pixel-based data packet with use of the time differential modulation without image content repetition in accordance with a further exemplary embodiment of the invention.
DETAILED DESCRIPTION
(18) FIG. 1 shows an arrangement for the optical transmission of data in free space from a transmitter 1 to a receiver 2. The transmitter 1 is an imaging device 1 with a plurality of image points 3, here in the form of a television with flat screen. The receiver 2 is a mobile terminal for telecommunication, for example a smartphone with integrated camera 4 for recording the transmitted data. The signal visualised on the television 1 is a conventional image signal which usually contains moving images; the data to be transmitted from the transmitter 1 to the receiver 2 is superposed on this image signal. The television 1 thus reproduces additional data besides the conventional image signal.
(19) The data to be transmitted may be useful data or control data. For example, photos or music items may thus be downloaded from the Internet or a local network and transmitted via the free-space transmission according to the invention from the television 1 to the terminal 2 whilst watching television. Control data may also be transmitted, for example via a bus of a building automation system, from the television to the terminal, and in this way any devices such as lights, heating, blinds, projectors, etc. may be controlled.
(20) A signal processor in the television 1 modulates the data onto the television image such that the data is not visible to a viewer. The camera 4 in the smartphone 2 records the modulated television image. A processor with implemented demodulator and decoder performs a demodulation and decoding to recover the data from the television image and provides the data for the application on the terminal 2. The user merely has to align the camera 4 roughly for this. The receiver 2, however, does not have to be a mobile terminal. It may also be arranged in stationary fashion in the room.
(21) FIG. 2 shows a temporal display sequence of six successive images of an image sequence. Three pixel-based data packets d(i, j, m) are transmitted differentially within the shown time portion, wherein the time differential modulation is used with image content repetition in accordance with a first exemplary embodiment of the invention. In the time differential modulation according to the first exemplary embodiment of the invention, an image and data repetition is implemented in the time direction, which means that each individual image of the original video sequence is repeated once. Two identical, successive individual images of the video are combined to form an image pair 5. A pixel-based data packet d(i, j, m) is superposed on the first image of an image pair 5 with positive sign and on the second image of the same image pair 5 with negative sign. For recovery of the data, the difference between the recordings of the two modulated images of an image pair 5 is produced in the receiver 2. In order to increase the data throughput, a new pixel-based data packet d(i, j, m) is allocated to each image pair 5 of the image sequence.
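The modulation rule of this first embodiment can be sketched as follows; the amplitude value, array shapes and function name are illustrative assumptions, not the patent's reference implementation.

```python
import numpy as np

def modulate_image_pairs(frames, data_packets, amplitude=2.0):
    """Time differential modulation with image content repetition.

    Each source frame is repeated once to form an image pair; the
    pixel-based data packet d (values +1/-1) is added to the first
    image of the pair and subtracted from the second image.
    """
    out = []
    for frame, d in zip(frames, data_packets):
        out.append(frame + amplitude * d)   # first image: positive sign
        out.append(frame - amplitude * d)   # second image: negative sign
    return out

# Example: two 4x4 frames, one random +/-1 data packet per image pair
rng = np.random.default_rng(0)
frames = [np.full((4, 4), 128.0), np.full((4, 4), 64.0)]
packets = [rng.choice([-1.0, 1.0], size=(4, 4)) for _ in frames]
tx_sequence = modulate_image_pairs(frames, packets)

# The receiver recovers 2*amplitude*d from the pairwise difference;
# the background image content cancels out completely.
diff = tx_sequence[0] - tx_sequence[1]
assert np.allclose(diff, 4.0 * packets[0])
```

The pairwise subtraction is what makes the superposed data invisible to the viewer yet recoverable: the static image content cancels, leaving only the doubled modulation amplitude.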
(22) FIG. 3 shows a temporal display sequence of the modulation of a pixel-based data packet d(i, j, m) with use of the time differential modulation without image content repetition in accordance with a second exemplary embodiment of the invention. In the second exemplary embodiment, time differential modulation is used, just as in the first exemplary embodiment. The main difference is that only a data packet of limited size is transmitted, rather than a continuous data stream. Proceeding from FIG. 2, the same pixel-based data packet d(i, j, m) is therefore superposed with each image pair 5. The concept provides that only a small data amount (for example an Internet link) will be transmitted. By utilising the redundancy in the time direction (the camera receives the same pixel-based data packet d(i, j, m) more than once), the transmission may be robust even with use of very small modulation amplitudes. Furthermore, in this second exemplary embodiment there is no repetition of the image content. The method should therefore also function with a possible limitation of the maximum image reproduction rate to 25 or 30 Hz; a temporal repetition of the image content would otherwise limit the image content reproduction to 12.5 or 15 Hz, which is inadequate. FIG. 3 shows the temporal display sequence in accordance with the second exemplary embodiment without image content repetition. It can be seen that the pixel-based data packet d(i, j, m) may theoretically be produced from the difference of any two successive display individual images, without consideration of image content changes in the receiver 2, wherein the sign of the difference image is inverted in each second data pair 7. By summing a plurality of difference images of the same data pair 6 or 7, the signal-to-noise ratio of the difference image may be improved, whereby the robustness of the transmission increases with the number of image recordings 12 used for production of a difference image. The objective of the time synchronisation in the second exemplary embodiment is to produce a difference image by the summing, inclusive of sign, of a plurality of individual difference images, wherein it should be taken into consideration that the image content of the image sequence displayed in parallel on the imaging device may change in each image.
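The sign-inclusive summing of individual difference images can be sketched as follows; the noise level, array shapes and helper name are illustrative assumptions. Each new difference image is trial-added once with positive and once with negative sign, and the sign yielding the stronger accumulated data signal is kept.

```python
import numpy as np

def accumulate_difference_images(diff_images):
    """Sum individual difference images inclusive of sign.

    Starting with the first difference image, each further difference
    image is trial-added once with positive and once with negative
    sign; the variant with the stronger data signal (larger sum of
    absolute pixel values) is kept.
    """
    acc = diff_images[0].astype(float).copy()
    for d in diff_images[1:]:
        plus = acc + d
        minus = acc - d
        acc = plus if np.abs(plus).sum() >= np.abs(minus).sum() else minus
    return acc

# Demo: the same +/-1 data signal with alternating sign, plus noise,
# as produced by differences of successive camera recordings.
rng = np.random.default_rng(1)
signal = rng.choice([-1.0, 1.0], size=(8, 8))
diffs = [(+1 if k % 2 == 0 else -1) * signal
         + 0.2 * rng.standard_normal((8, 8)) for k in range(6)]
summed = accumulate_difference_images(diffs)

# All six contributions end up with coherent sign, so the summed
# difference image reproduces the data signal with improved SNR.
assert np.array_equal(np.sign(summed), np.sign(signal))
```

The absolute-value comparison works because coherent summing grows the data amplitude linearly with the number of images, whereas a wrong sign would cancel it.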
(23) FIG. 4a) shows the modelling 10 of the space-time reproduction behaviour of typical LCD displays. It is known that LCD or OLED screens (both TV devices and computer monitors) have a rectangular aperture in the time direction. Here, the image content is typically maintained over the entire possible display duration of an individual image (for example 20 ms at 50 Hz). A possible pulse width modulation of the LED backlight in LCD screens or a black frame insertion in OLED screens is disregarded at this juncture. However, at the transition 8 from one individual image 16 to the next during the reproduction of a video, the image content is generally not updated time-synchronously for all pixels, but line by line with a time offset 14. FIG. 4a) shows this time-space reproduction behaviour by way of example on the basis of an image reproduction with 60 images per second, corresponding to an image display duration of 16.67 ms. The horizontal axis in this depiction represents the time dimension. The vertical axis represents the vertical spatial dimension of the display, i.e. the origin of y represents the uppermost line of the display. The regions characterised by different levels of brightness each represent an image of the image sequence 16. The time difference t.sub.Scan between the update of the first line and the update of the last line is approximately 7.5 ms in this case, and the mean reaction time of the LCD display modelled here by way of example is t.sub.R=3 ms, which by implication means that the image content of an individual image is displayed fully, i.e. simultaneously by all pixels, not for t.sub.D=16.67 ms, but instead only for a duration of t.sub.full=t.sub.D−t.sub.Scan−t.sub.R=6.17 ms. It should be noted at this juncture that this scanning behaviour 14 of displays is not implemented consistently and varies from device to device.
Measurements in a laboratory have revealed that displays exist in which t.sub.Scan is only slightly smaller than t.sub.D, and so the entire image content of an individual image 16 is practically never shown by all lines of the display. When measuring another display, it was found that the scanning behaviour 14 was implemented in an opposite direction, which means that the line-based updating starts with the lowermost line, and the uppermost line is updated last.
(24) FIG. 4b) shows the space-time modelling of the image recording 11 of a CMOS camera, wherein four individual images of the image recording 12 are shown. The horizontal axis in this depiction represents the time dimension. The vertical axis represents the vertical spatial dimension 9 of the camera sensor or of the image recording. It is known that such a space-time scanning behaviour is also widespread in CMOS camera sensors; reference is made in this case to a rolling-shutter sensor. In rolling-shutter sensors the light-active elements of the camera sensor are not exposed and read out 15 simultaneously, but line by line with a time offset, which leads to the known rolling-shutter effect when recording motion. The time offset between the exposure of successive sensor lines may vary here, similarly to the time offset of the line-based updating of the display reproduction 14. By contrast, global-shutter sensors exist, which allow a simultaneous exposure of all light-active sensor elements. However, the use of rolling-shutter camera sensors is very widespread in smartphone cameras.
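The space-time behaviour of FIGS. 4a) and 4b) can be captured in a small model. The linear line-offset assumption and all function and parameter names are illustrative, not taken from the patent:

```python
def display_line_update_time(frame_index, y, height, f_d, t_scan):
    """Time at which display line y switches to frame `frame_index`
    (line-by-line update with total offset t_scan, cf. FIG. 4a))."""
    return frame_index / f_d + (y / (height - 1)) * t_scan

def camera_line_exposure(frame_index, y, height, f_c, t_scan_cam, t_exp):
    """Exposure interval (start, end) of sensor line y in recording
    `frame_index` of a rolling-shutter camera (cf. FIG. 4b))."""
    start = frame_index / f_c + (y / (height - 1)) * t_scan_cam
    return start, start + t_exp

# Values from the example: 60 Hz display, 1080 lines, t_Scan = 7.5 ms.
t_top = display_line_update_time(1, 0, 1080, 60.0, 0.0075)
t_bottom = display_line_update_time(1, 1079, 1080, 60.0, 0.0075)
assert abs((t_bottom - t_top) - 0.0075) < 1e-9

# A rolling-shutter line exposure window, e.g. 3 ms exposure time.
start, end = camera_line_exposure(0, 0, 1080, 60.0, 0.008, 0.003)
assert (start, end) == (0.0, 0.003)
```

Comparing the update time of a display line with the exposure window of the camera line imaging it is exactly the geometric reasoning the following figures build on.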
(25) FIG. 5 shows by way of example the space-time modelling 10 and 11 of different reproduction-recording combinations which may result when recording a 60 Hz image reproduction 10 with a rolling-shutter camera with an image recording rate of f.sub.C=60 images per second and an exposure time of t.sub.exp≈3 ms without phase synchronisation, that is to say with a random recording phase 13. FIGS. 5a) and 5b) show cases in which the orientation of the scanning direction of camera and display is the same. Due to the random recording start time of the camera, in the scenario shown in FIG. 5b) there is a blending of image contents within the image recordings 12, and resultant image transitions 17 of the image sequence shown on the transmitter side within the image recordings 12. By contrast, the image recording 11 in scenario a) is sufficiently phase-synchronous to ensure an error-free image recording 11 with the shown combination of display and camera and the selected exposure time t.sub.exp. FIG. 5c) shows the case in which no phase synchronisation can be achieved on account of the opposite orientation of the line-based scanning of display and camera.
(26) FIG. 6 shows a space-time modelling of an image reproduction 10 in accordance with a first exemplary embodiment of the invention and an image recording 11 for f.sub.C>f.sub.D. The purpose of the synchronisation algorithm in this first exemplary embodiment is to realise a section-based image pair allocation, in which the individual images of an image pair 5 may be situated in different image recordings 12. FIG. 6 illustrates the principle. The image reproduction 10 of the display in this case occurs with an image reproduction rate of f.sub.D=50 Hz, whereas the camera records with f.sub.C=60 Hz. The reaction time of the display is negligibly short. Within the shown section, four individual images are displayed by the display (1+, 1−, 2+, 2−), with the aid of which two difference images ((1+)−(1−), (2+)−(2−)) may be produced. The individual image recordings of the camera 12 are denoted by capital letters. It can be seen that the upper 20% of the first difference image in the optimal case is produced from image recordings A and C, whereas the lower 80% may be composed of recording A and recording B (partition line i). In order to produce the second difference image, recording E may be used as subtrahend for the entire difference image, wherein recording D should be selected as minuend for the upper 60% and recording C should be selected as minuend for the lower 40% (partition line ii).
(27) The image recording rate f.sub.C and exposure time t.sub.exp of the camera are selected in the example shown in FIG. 6 in such a way that precisely one point is produced on the y-axis at the image transitions 17 within the image recordings 12, at which point a change of the image recordings for production of a difference image is expedient (dashed partition lines). An increase of the image recording rate f.sub.C of the camera with constant exposure time t.sub.exp, or a reduction of the exposure time t.sub.exp with constant image recording rate f.sub.C, would by contrast lead to a transition region in which two equivalent recordings of a display image exist. With a reduction of the image recording rate f.sub.C with constant exposure time t.sub.exp, or an increase of the exposure time t.sub.exp with constant image recording rate f.sub.C, a transition region would by contrast be created in which there is no completely ISI-free recording of both individual images. The image recording rate f.sub.C and exposure time t.sub.exp selected in FIG. 6 consequently represent, at the given image reproduction rate f.sub.D, an optimal working point at which all images of the shown image sequence may be fully reconstructed. A mathematical relationship between the used image recording rate f.sub.C, the exposure time t.sub.exp, and the image reproduction rate of the display f.sub.D may be derived for this working point from the geometric conditions of the space-time modelling:
(28) f.sub.C≥1/(1/f.sub.D−t.sub.exp) (1)
t.sub.exp≤1/f.sub.D−1/f.sub.C (2)
(29) The relationship may be described as follows: at a given display image reproduction rate f.sub.D and an exposure time t.sub.exp desired for the recording, the image recording rate f.sub.C must be at least as high as the value calculated according to (1), so that a completely ISI-free recording of each individual image is provided in the receiver. Conversely: at a given image reproduction rate f.sub.D and an image recording rate f.sub.C available in the receiver camera (f.sub.C>f.sub.D), the exposure time t.sub.exp must correspond at most to the difference of the reciprocal values of both frequencies (2). The exposure time t.sub.exp selected in the example from FIG. 6 corresponds to the maximum value of 3.33 ms at the selected frequencies (f.sub.D=50 Hz, f.sub.C=60 Hz). The mathematical relationships shown at (1) and (2) and the resultant conclusions are also valid if the scanning behaviour of camera and display is opposite on account of an inverse orientation, wherein here the number of occurring image transitions 17 is higher. FIG. 7 shows the space-time modelling of an image reproduction 10 in accordance with a first exemplary embodiment of the invention and an image recording 11 for f.sub.C>f.sub.D with opposite scanning behaviour of display and camera.
(30) The equations (3) and (4) extend the relationships (1) and (2) by the mean display reaction time t.sub.R typical for LCDs, whereby with constant image recording rate f.sub.C the available exposure time t.sub.exp is reduced by t.sub.R or with constant exposure time t.sub.exp the necessary image recording rate f.sub.C of the camera rises. A shortest possible display reaction time is therefore advantageous for the method.
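Using relations (1) to (4) as reconstructed from the numerical example above (an assumption; function and parameter names are illustrative), the working point of FIG. 6 can be checked numerically:

```python
def max_exposure_time(f_d, f_c, t_r=0.0):
    """Upper bound on the exposure time, cf. relations (2) and (4):
    t_exp <= 1/f_D - 1/f_C - t_R."""
    return 1.0 / f_d - 1.0 / f_c - t_r

def min_recording_rate(f_d, t_exp, t_r=0.0):
    """Lower bound on the camera frame rate, cf. relations (1) and (3):
    f_C >= 1 / (1/f_D - t_exp - t_R)."""
    return 1.0 / (1.0 / f_d - t_exp - t_r)

# Working point of FIG. 6: f_D = 50 Hz, f_C = 60 Hz, t_R negligible.
t_exp_max = max_exposure_time(50.0, 60.0)      # ~3.33 ms
assert abs(t_exp_max - 1.0 / 300.0) < 1e-12

# At exactly this exposure time, 60 Hz is the minimum camera rate.
assert abs(min_recording_rate(50.0, t_exp_max) - 60.0) < 1e-6

# A mean display reaction time t_R = 3 ms shrinks the exposure budget.
assert max_exposure_time(50.0, 60.0, t_r=0.003) < 0.001
```

The last assertion illustrates the conclusion of paragraph (30): with t.sub.R=3 ms, almost the entire exposure budget is consumed, so a short display reaction time is advantageous.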
(31) f.sub.C≥1/(1/f.sub.D−t.sub.exp−t.sub.R) (3)
t.sub.exp≤1/f.sub.D−1/f.sub.C−t.sub.R (4)
(32) FIG. 8 shows a space-time modelling of an image reproduction 10 in accordance with a first exemplary embodiment of the invention and an image recording 11 for f.sub.C=f.sub.D once with identical scanning direction and once with opposite scanning direction of the line-by-line scanning behaviour of display 14 and camera 15. In this case there is no oversampling according to (2) or (4), which means that transition regions 17 which in all image recordings 12 are located in the same vertical image region may be created depending on the relative recording phase 13. Similarly to the previously described examples, a linear transition 17 takes place within these regions between two individual images of the image sequence. Without oversampling, however, no ISI-free substitute recordings of the individual images in the transition region 17 exist in the receiver. The two adjacent individual images transition linearly into one another within the transition region 17. In the middle of the transition regions 17 denoted in the image by the auxiliary lines (at the height of the point of intersection of the red lines), the image recordings 12 are therefore composed in each case of exactly 50% of the adjacent individual images of the image sequence 16. In recordings B and D this therefore leads in FIG. 8a) to a complete erasure of the superposed pixel-based data packet at said point, whereas in recordings A and C there is a mixing of the adjacent mutually independent pixel-based data packets. Without a suitable oversampling by the camera, image regions may therefore be created in which the individual images are not fully provided, and therefore the difference images may not be generated correctly.
(33) In order to be able to perform a section-based allocation of the image recordings 12 of the camera, all (rectified and locally synchronised) image recordings 12 are divided in the vertical direction into n.sub.v (for example n.sub.v=10) vertical image sections 18 of equal size. This is shown in FIG. 9.
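The division into vertical image sections may be sketched as a simple slicing step; the helper name is an illustrative assumption:

```python
import numpy as np

def vertical_sections(image, n_v):
    """Divide an image recording into n_v vertical image sections of
    equal size (split along the vertical image direction, cf. FIG. 9)."""
    h = image.shape[0]
    assert h % n_v == 0, "image height must be divisible by n_v"
    return [image[k * h // n_v:(k + 1) * h // n_v, :] for k in range(n_v)]

frame = np.zeros((1080, 1920))           # full HD image recording
sections = vertical_sections(frame, 10)  # n_v = 10 as in FIG. 9
assert len(sections) == 10
assert sections[0].shape == (108, 1920)
```

Each of these sections is then synchronised independently, which accommodates the fact that the image-pair allocation changes along the vertical axis (partition lines i and ii in FIG. 6).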
(34) FIG. 10 shows the two-dimensional division of an exemplary full HD image recording 12 into n.sub.v=5 vertical image sections 18 and n.sub.h=8 horizontal image sections 24, which may be used in a second exemplary embodiment of the invention.
(35) It is firstly assumed that the background image material is not a video, but a static image. The purpose of the algorithm is to find successively all possible image pair allocations within the image recordings 12 and to assemble the difference images on the basis of the allocation table. To this end, all eligible differences for each vertical image section 18 are produced and evaluated. Here, it is firstly sufficient for the evaluation to use just one modulated colour channel. With the modulation of the colour difference channels U and V of the YUV colour space, the U or the V channel could consequently be used. The evaluation consists fundamentally in that, for each section n, all differences of the modulated colour channel modCh eligible for an allocation are produced and in each case the sum of the absolute values of the difference pixel values B.sub.sum,modCh(k.sub.X,k.sub.Y,n) is formed (6). A size comparison of B.sub.sum,modCh(k.sub.X,k.sub.Y,n) for all eligible combinations of the image recordings k.sub.X and k.sub.Y within a section n delivers the correct allocation for this section.
(36) B.sub.sum,modCh(k.sub.X,k.sub.Y,n)=(n.sub.v/(W·H))·Σ.sub.(p,q)∈n|f.sub.modCh(p,q,I.sub.X)−f.sub.modCh(p,q,I.sub.Y)| (6)

wherein the sum runs over all W·H/n.sub.v image sensor pixels (p, q) of the vertical image section n.
Here:
k.sub.X, k.sub.Y image recording index no. X, Y
n running index of the vertical image sections 18, n=1 . . . n.sub.v
p, q image sensor pixel coordinates of the image recordings 12
W, H image width W and image height H of the image recordings 12 in image sensor pixel coordinates
modCh modulated colour channel of the image recordings 12, wherein a modulated colour channel is a colour channel within a colour space which was used on the transmitter side for the pixel-based modulation of the data sequence
f.sub.modCh(p, q, I.sub.X) intensity value (meaning the height of the amplitude of the digital representation of an image) of the colour channel modCh at the image sensor coordinates (p, q) in image recording X
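As an illustration, the comparison criterion of equation (6) may be sketched as follows. This is a minimal sketch under assumptions: the function name, array layout and the interpretation of the criterion as the mean of the absolute difference pixel values are illustrative, not part of the claimed method.

```python
import numpy as np

def b_sum(rec_x, rec_y, n, n_v):
    """Comparison criterion of equation (6): mean of the absolute
    difference pixel values of the modulated colour channel within the
    vertical image section n (1-based) of two image recordings.

    rec_x, rec_y: 2-D arrays (H x W) with the intensity values
    f_modCh(p, q, .) of the image recordings X and Y;
    n_v: number of vertical image sections.
    """
    h = rec_x.shape[0]
    sect = slice((n - 1) * h // n_v, n * h // n_v)
    diff = rec_x[sect].astype(np.int32) - rec_y[sect].astype(np.int32)
    return np.abs(diff).mean()
```

For an image pair according to case 1 (data packet superposed once with +A and once with −A) the criterion evaluates to 2 A; for independent packets (case 2) it approaches A; for identical packets (case 3) it vanishes.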
(37) Without consideration of the non-linear reproduction behaviour of displays (gamma curve) and on the assumption of an ideal imaging of the display on the camera sensor, three possible situations may occur in principle when calculating B.sub.sum,modCh(k.sub.X,k.sub.Y,n) (see Table 2).
(38) In the first case the image recordings k.sub.X and k.sub.Y within the vertical image section n contain recordings of two individual images which belong to an image pair 5. For example, these could be sections of the images of image sequence 16 with the modulated data +1 and −1. The forming of the difference, depending on the data, results in the desired amplitudes 2 A (logic “1”) or −2 A (logic “0”). The mean value of the amplitudes B.sub.sum,modCh(k.sub.X,k.sub.Y,n) is consequently 2 A.
(39) In the second case the image recordings k.sub.X and k.sub.Y within the vertical image section n contain recordings of two individual images which contain mutually independent pixel-based data packets (example: −1 and +2). On the assumption of equally distributed data bits, the amplitudes are erased here in half the cases. As a result, the three possible difference amplitudes 2 A, −2 A and 0 are created with the occurrence likelihoods specified in Table 2. The mean value of the amplitudes B.sub.sum,modCh(k.sub.X,k.sub.Y,n) in this case assumes the value A and is thus half as big as in case 1.
(40) In the third case the image recordings k.sub.X and k.sub.Y within the vertical image section n contain recordings of the same individual images. This case may occur only with an oversampling, that is to say if the image recording rate f.sub.C is greater than the image reproduction rate f.sub.D. Due to the same superposed pixel-based data packet and the associated identical modulation in both recordings, the pixel-based data packet is erased as the difference is formed. Consequently: B.sub.sum,modCh(k.sub.X,k.sub.Y,n)=0.
(41) Case 1 corresponds to the correct image pair allocation. For evaluation, a size comparison of B.sub.sum,modCh(k.sub.X,k.sub.Y,n) may thus be performed for all eligible differences within a section n, since B.sub.sum,modCh(k.sub.X,k.sub.Y,n) in case 1 is always larger than in case 2 or 3. It should be noted that, on account of the exposure time of the camera, one of the three cases does not necessarily occur in pure form; instead there are often combinations of cases within a section. An oversampling according to (3), however, ensures that case 1 exists for each image line.
(42) An exemplary sequence of the evaluation algorithm will be explained hereinafter on the basis of an example. To this end, FIG. 11 again shows the space-time modelling of the reproduction-recording combination known from FIG. 6, wherein f.sub.C=60 images per second are recorded by a rolling-shutter camera at an image reproduction rate 10 of f.sub.D=50 Hz. In the example depicted, the individual image recordings 12 are divided into only n.sub.v=5 vertical image sections 18 (i, ii, . . . v). The production of the first difference image, which may be produced from the difference of images 1+ and 1− of the image sequence, will be traced.
(43) The algorithm performs the image pair allocation section by section, wherein at the start the third or fourth image recording (here recording C) is used as reference, rather than the first recording. Due to the possible oversampling (f.sub.C>f.sub.D), both the directly adjacent image recordings B and D and the image recordings A and E are eligible for producing a difference image for image recording C. In the first step, B.sub.sum,modCh(k.sub.X,k.sub.Y,n) is therefore calculated for the combinations k.sub.X/k.sub.Y=A/C, B/C, C/D and C/E. By way of a size comparison, the combination A/C wins, since here a clear difference according to case 1 is produced; C/D and C/E correspond precisely to case 2, whereas in B/C there is a mixture of cases 1 and 3. Since the image recording B may also contain a recording of the display image 1−, the winner A/C is compared in the next step with the previously unconsidered combination A/B. In the present case, however, the better difference image in section i is produced by the combination A/C, and therefore this combination is stored in the allocation table (Table 1) for n=1. In section ii the same procedure is repeated, wherein here too the combination A/C is strongest in the first comparison (A/C, B/C, C/D, C/E). In the second comparison (A/C, A/B), A/B by contrast wins, as can be seen with reference to FIG. 11. On account of the final result A/B from section ii, image recording B rather than image recording C is used as the new reference image recording in section iii. A change of reference image recording is always implemented when the previous reference image recording is not part of the final allocation in the previous section. In this way a phase-false formation of the difference images is counteracted: if the algorithm were instead continued with image recording C, difference image 2 would already be produced in sections iii-v.
In section iii the combinations A/B, B/C and B/D are thus examined, wherein the combination A/B wins in all remaining sections (iii-v). The first difference image may thus be produced in accordance with the allocation in Table 1 from a combination of the image recordings A, B and C. The second difference image is composed of the image recordings C, D and E. The difference images may then be produced in all modulated colour channels on the basis of the allocation table.
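The section-by-section allocation described above may be sketched as follows. This is a simplified sketch under assumptions: the function names, the candidate window of ±2 recordings around the reference and the omission of the second-stage comparison (A/C versus A/B) are illustrative simplifications, not the exact claimed algorithm.

```python
import numpy as np

def b_sum(rx, ry, n, n_v):
    # comparison criterion of equation (6): mean absolute difference of
    # the modulated colour channel within vertical image section n
    h = rx.shape[0]
    sect = slice((n - 1) * h // n_v, n * h // n_v)
    return np.abs(rx[sect].astype(np.int32) - ry[sect].astype(np.int32)).mean()

def allocate_sections(recordings, n_v, ref=2):
    # recordings: list of 2-D arrays (modulated colour channel of the
    # rectified, locally synchronised image recordings A, B, C, ...);
    # ref=2 starts with the third recording (recording C) as reference
    table = []
    for n in range(1, n_v + 1):
        # due to possible oversampling, recordings up to two steps away
        # from the reference are eligible for a difference image
        candidates = [tuple(sorted((ref, k)))
                      for k in (ref - 2, ref - 1, ref + 1, ref + 2)
                      if 0 <= k < len(recordings)]
        winner = max(candidates,
                     key=lambda xy: b_sum(recordings[xy[0]],
                                          recordings[xy[1]], n, n_v))
        table.append(winner)
        if ref not in winner:      # change of reference image recording
            ref = winner[1]
    return table
```

With five synthetic recordings in which the third and fourth carry the same packet with opposite sign, the sketch allocates that pair to every section.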
(44) TABLE 1 Section-based allocation of the image recordings A to E according to FIG. 11

n                   1     2     3     4     5
Difference image 1  A-C   A-B   A-B   A-B   A-B
Difference image 2  D-E   D-E   D-E   C-E   C-E
(45) The mean value of the amplitudes B.sub.sum,modCh(k.sub.X,k.sub.Y,n) of a modulated channel is, however, as described, suitable as a comparison criterion only if the modulated image material contains no movements or only very slight movements. This is due to the fact that, with image content changes, a difference image according to case 2 (difference of independent pixel-based data packets and any deviating image content) contains not only the useful signal amplitudes (2 A, −2 A and 0) but also difference amplitudes caused by the image content changes. In case 1 or 3 this does not occur, due to the image content repetition of the time differential modulation. In the event of excessive movement, the mean value of the amplitudes in case 2 may, due to the image content differences, assume a higher value than the mean value of the amplitudes in case 1. Consequently, an incorrect decision would be made, which prevents correct difference image production. The comparison criterion (5) therefore must be extended sensibly so that movements of the background video are not taken into consideration or the criterion in this case is sufficiently weakened by a penalty factor. In particular, the inclusion of a second, non-modulated colour channel has proven its worth here. On the assumption that movements in natural image sequences are correlated between the image channels, the reciprocal value of the mean value of an unmodulated colour channel may be included as weighting factor, whereby an extended comparison criterion may be defined:
(46) B.sub.sum,w,modCh(k.sub.X,k.sub.Y,n)=B.sub.sum,modCh(k.sub.X,k.sub.Y,n)/B.sub.sum,unmodCh(k.sub.X,k.sub.Y,n)

wherein B.sub.sum,unmodCh(k.sub.X,k.sub.Y,n) is formed analogously to (6) from an unmodulated colour channel.
(47) It is thus assumed that image contents with movements in a colour channel, for example in a modulated colour channel such as the U or V channel of the YUV colour space, typically also include movements in another colour channel, such as a non-modulated channel, for example the Y channel of the YUV colour space. A mean difference amplitude increased by image content changes in a modulated colour channel will thus be sufficiently weakened by means of division by
(48) B.sub.sum,unmodCh(k.sub.X,k.sub.Y,n)
so as to prevent an incorrect decision.
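The extended, motion-weighted criterion may be sketched as follows. The function names, channel layout and the small epsilon guard against division by zero for fully static content are illustrative assumptions.

```python
import numpy as np

def b_sum_section(a, b, n, n_v):
    # mean absolute difference within vertical image section n, as in (6)
    h = a.shape[0]
    sect = slice((n - 1) * h // n_v, n * h // n_v)
    return np.abs(a[sect].astype(np.float64) - b[sect].astype(np.float64)).mean()

def b_sum_weighted(rx_mod, ry_mod, rx_unmod, ry_unmod, n, n_v, eps=1e-6):
    # extended criterion: the mean difference amplitude of the modulated
    # channel (e.g. U or V of the YUV colour space) is divided by that of
    # a non-modulated channel (e.g. Y); motion correlates across colour
    # channels and is thereby weakened, the modulation itself is not
    return (b_sum_section(rx_mod, ry_mod, n, n_v)
            / (b_sum_section(rx_unmod, ry_unmod, n, n_v) + eps))
```

For a static scene the unmodulated channel contributes almost no difference, so the criterion stays large; a brightness change affecting both channels pushes the criterion towards 1 and thus weakens case-2 differences inflated by motion.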
(49) FIG. 12 shows the space-time modelling of an image reproduction 10 in accordance with a second exemplary embodiment of the invention and an image recording 11 for f.sub.C=f.sub.D. In accordance with the second exemplary embodiment of the invention, merely one pixel-based data packet per modulated colour channel is transmitted, which means that only one difference image per transmission channel has to be produced in the receiver. By recording a greater number of image recordings 12 (for example 20) with the camera, however, the same difference image (in respect of the superposed pixel-based data packet) may be produced more than once. Since the noise added by the camera sensor (primarily shot noise) does not correlate between different recordings, a difference image that is improved in respect of the signal-to-noise ratio may be produced by a constructive superposition of the individual difference images.
(50) As in the first exemplary embodiment of the invention, in the second exemplary embodiment it is assumed that the image recording rate f.sub.C of the camera corresponds approximately to the image reproduction rate f.sub.D of the display. Here too, however, it is advantageous if the two rates are not identical, so that any image content transitions 17 occurring in the image recordings 12 are not located in the same vertical region in each recording. For the sake of simplicity it is firstly assumed, as shown in FIG. 12, that image recording 11 and image reproduction 10 are synchronous. In accordance with the second exemplary embodiment of the invention, the same pixel-based data packet is superposed with each individual image with a sign alternating from image to image.
(51) FIG. 13 shows a signal flow diagram of the receiver according to a second exemplary embodiment of the invention and represents an exemplary sequence of the algorithm for summing a plurality of difference images. Firstly, all difference images D.sub.X−Y of successive image recordings (X−Y=A−B, B−C, C−D, D−E, etc.) are produced 19. The produced difference images are then filtered locally using the filter matrix 20. The first difference image D.sub.A−B is by definition the starting difference image, wherein the sign of D.sub.A−B is unknown: whereas the difference formation of +1 and −1 produces the correct difference image (correct sign), the difference formation of −1 and +1 leads to the inverted difference image (incorrect sign). In the case shown in FIG. 12 the sign of D.sub.A−B is correct; if, instead, D.sub.B−C were the starting difference image, the sign would be incorrect. The following difference images (D.sub.B−C, D.sub.C−D, etc.) are added successively to D.sub.A−B. So as to be able to ensure a constructive superposition and thus an improvement of the difference image, the sign of the difference image to be added must match the sign of the starting difference image. In order to achieve this, the difference image to be added is, by way of trial, added once and subtracted once 21. It is then determined which operation led to a strengthening of the difference image. To this end the mean value B.sub.sum,modCh(k.sub.X,k.sub.Y,n) defined in equation (6) may be used as comparison criterion 22, since it is greater in the case of a constructive superposition than in the case of a destructive superposition. Due to the unknown sign of the starting difference image D.sub.A−B, the sign of the accumulated difference image D.sub.akk is also unknown. There are thus two different possible difference images, which must be taken into consideration in the following decoding steps.
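The trial-based accumulation may be sketched as follows; this sketch covers only the trial step 21 and the comparison 22, without the local filtering 20 and the section-wise split, and its names are illustrative assumptions.

```python
import numpy as np

def accumulate(diff_images):
    # Constructive accumulation of successive difference images: each
    # candidate is added and subtracted by way of trial, and the
    # operation with the larger mean amplitude (criterion as in
    # equation (6), here applied to the whole image) is kept.
    acc = diff_images[0].astype(np.float64)   # starting difference image
    for d in diff_images[1:]:
        added = acc + d
        subtracted = acc - d
        acc = (added if np.abs(added).mean() >= np.abs(subtracted).mean()
               else subtracted)
    return acc
```

With the alternating signs of FIG. 12, four noise-free difference images 2p, −2p, 2p, −2p accumulate to 8p, up to the globally unknown sign of the starting difference image.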
(52) In the case shown in FIG. 12, in which the image recording rate f.sub.C corresponds to the image reproduction rate f.sub.D, the size comparison is in principle not necessary for correct accumulation, since the sign alternates from difference image to difference image (A−B, B−C, etc.). If an image recording rate f.sub.C deviating from the image reproduction rate f.sub.D is allowed, however, the comparison is required, since the sign change is not predictable in this case. In order to cope with any reproduction and recording timing combinations with image content transitions 17 within the image recordings 12, the summing algorithm is performed section by section, i.e. separately for each image section. The image recordings 12 are divided here into n.sub.v vertical image sections 18 and are processed separately. This is shown in FIG. 14. All processing steps within the dashed frame in FIG. 13 are performed in this case separately for each vertical image section 18n. The initial production of the individual difference images and the subsequent filtering may be performed beforehand on the basis of the complete image recordings 12. Owing to the separate execution of the summing, however, the signs of the sections are not necessarily consistent in the case of image content transitions 17 in the image recordings 12. For example, with five sections, the sign may be correct in the first two sections and incorrect in the remaining three sections. In order to counteract this, the signs of the individual sections are then matched 23, wherein the resultant common sign may still be incorrect. After the sign matching, however, there are only 2 possible difference images instead of 2.sup.n.sup.v.
(53) FIG. 15 shows an alternative temporal display sequence of the modulation of a pixel-based data packet with use of the time differential modulation without image content repetition in accordance with a further exemplary embodiment of the invention. This alternative modulation schema may be used, for example, when the image recording rate f.sub.C of the camera is approximately half the image reproduction rate of the display. This may be expedient if the modulated image sequence is reproduced with, for example, 50 or 60 Hz (which reduces the visibility of the superposed pixel-based data packet in comparison to a reproduction with 25 or 30 Hz), but the camera offers a higher image quality during the image recording 11 with f.sub.C=25 Hz or f.sub.C=30 Hz, because such an image recording is possible without lossy source coding/compression. As can be seen in FIG. 15, three successive images are modulated here in accordance with the following schema: positive data modulation, no data modulation, negative data modulation. This means that, on average, ⅔ of the difference images used for a summing have only half the useful signal amplitude. Due to the significantly reduced visibility of the superposed pixel-based data packet, however, greater modulation amplitudes are possible with the same visibility.
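The resulting amplitude pattern can be traced with a one-pixel example; the amplitude A=4 is an arbitrary assumed value. The display cycles through +A, 0, −A while the camera records every second frame, so that every third difference image carries the full amplitude 2 A and the remaining ones only half of it.

```python
A = 4                                   # assumed modulation amplitude
display = [+A, 0, -A] * 4               # schema of FIG. 15 over 12 frames
camera = display[::2]                   # camera at half the reproduction rate
diffs = [x - y for x, y in zip(camera, camera[1:])]
print(diffs)                            # -> [8, -4, -4, 8, -4]
```

Of the five differences, two have the full amplitude 2 A=8 and three have only half of it, consistent with the ⅔ proportion stated above.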