METHOD AND DEVICE FOR REDUCING ALIASING ERRORS IN IMAGES OF PIXEL-BASED DISPLAY DEVICES AND FOR THE EVALUATION OF DISPLAY DEVICES OF THIS TYPE

20230281758 · 2023-09-07

    Abstract

    The invention relates to a method for reducing aliasing errors in a moire-corrected final image formed by at least one camera image, wherein the at least one camera image is captured, by a camera (1) having imaging optics (1.1) and a sensor surface (1.2) with sensor pixels (1.3), as the depiction of a display image of a display device (2) with display pixels (2.1) arranged in a matrix-like manner and spaced apart at a display pixel pitch (D.sub.A) on the sensor surface (1.2), wherein, during the capture, the camera (1) is shifted relative to the display device (2) along at least one offset path (VP) in relation to a starting position (S). The invention also relates to a method for evaluating the presentation quality of a pixel-based display device (2), and a device for carrying out methods of this type.

    Claims

    1. A method for suppressing aliasing errors in a Moiré-corrected result image formed from at least one camera image, wherein by means of a camera comprising imaging optics and a sensor surface with sensor pixels the at least one camera image is recorded as an image of a display image of a display device having display pixels arranged in a matrix-like manner and spaced apart at a display pixel pitch onto the sensor surface, wherein, during the recording of at least one camera image, at least one of the camera and the sensor surface is moved relative to the display device along at least one offset path starting from a start position assigned to the respective camera image, wherein a first, a second and at least one of a third and a further camera image are recorded, wherein the first and the second camera image are recorded as a respective image of structurally identical display images displayed by the same display device, during the recording of the first and the at least one third or further camera image, the camera is one of not moved and moved only slightly relative to the display device, during the recording of the second camera image, the camera is moved relative to the display device along at least one offset path, a first and a second magnitude response of respective Fourier transforms of the first and the second camera image are determined, the amplitude response of an offset filter is determined from the first and the second magnitude response, for the at least one of the third and the further camera image, the Fourier transform is determined in each case and multiplied in each case by the amplitude response of the offset filter and, from this, a Moiré-corrected result image is determined by inverse Fourier transformation.

    2. The method according to claim 1, wherein the camera is moved relative to the display device along at least one offset path starting from a start position assigned to the respective camera image during the recording of at least one camera image.

    3. The method according to claim 1, wherein the first and second camera images are recorded from a first display device and each further camera image is recorded from a further display device, which is in each case identical in construction to the first display device and is arranged relative to the camera in the same position as the first display device.

    4. The method according to claim 1, wherein a binary amplitude response is calculated from the quotient of the second magnitude response with respect to the first magnitude response by assigning the value 0 to the binary amplitude response if the quotient is below a predetermined threshold value, and by assigning the value 1 to the binary amplitude response if the quotient is above a predetermined threshold value or is equal to the predetermined threshold value.

    5. The method according to claim 1, wherein the offset made along an offset path parallel to the sensor surface is at most five times the display pixel pitch.

    6. The method according to claim 1, wherein in a teach-in step an offset amplitude matching a display device and a camera is determined such that at least one of the following conditions applies: a Moiré interference measure is below a predetermined Moiré threshold value, and the Moiré interference measure is minimized when the camera is moved along an offset path with an offset that is smaller than or equal to the offset amplitude during the recording of the at least one camera image.

    7. The method according to claim 1, wherein in a teach-in step at least one offset path matching a display device and a camera is determined in such a way that at least one of the following conditions applies: a Moiré interference measure lies below a predetermined Moiré threshold value, and the Moiré interference measure is minimized when the camera is moved along this offset path relative to the start position during the recording of at least one camera image.

    8. The method according to claim 1, wherein for at least one camera image the camera is displaced perpendicularly to the display device.

    9. The method according to claim 1, wherein at least one camera image is recorded while the camera is moved relative to the display image at least along a first offset path and a second offset path arranged symmetrically thereto.

    10. The method according to claim 1, wherein a color-channel-related result image is determined for a plurality of color channels in each case by arranging a color filter between the display device and the sensor surface of the camera, a color-channel-related offset interval being determined for each color channel, and the color-channel-related result images being registered against one another.

    11. A method for evaluating the display quality of a pixel-based display device, wherein a Moiré-corrected result image is formed from at least one camera image recorded with a camera using the method according to claim 1, and the display quality is evaluated on the basis of the Moiré-corrected result image.

    12. The method according to claim 11, wherein defective display pixels of the display device are at least one of detected and localized.

    13. The method according to claim 11, wherein the local distribution of a photometric parameter across the display device is determined.

    14. A device for carrying out a method according to claim 1, comprising a camera, a positioning unit, a control unit and an evaluation unit, wherein the positioning unit is set up to move the camera by an offset lying within the offset interval, and wherein the control unit is set up to trigger the recording of a camera image, and wherein the evaluation unit is set up to form a Moiré-corrected result image from at least one camera image using the method according to claim 1.

    15. The device according to claim 14, wherein the control unit is set up for controlling the positioning unit and for triggering the recording of a camera image in a manner coordinated with the movement of the positioning unit.

    16. The device according to claim 14, wherein the control unit and the evaluation unit are each designed as part of a control and evaluation unit.

    17. The device according to claim 14, wherein the positioning unit is designed as a vibration unit and is arranged on a housing of the camera.

    18. The device according to claim 14, wherein the camera is designed as a luminance measurement camera.

    19. The method according to claim 13, wherein the photometric parameter is the luminance.

    Description

    [0137] Exemplary embodiments of the invention are explained in more detail below with reference to drawings.

    [0138] FIG. 1 schematically shows an arrangement for taking display images of a display device,

    [0139] FIGS. 2A and 2B schematically show arrangements of sensor pixels and display pixels,

    [0140] FIGS. 3A and 3B schematically show the magnitude of Fourier transforms of a first and a second camera image and

    [0141] FIG. 4 schematically shows the magnitude response of an amplitude response of an offset filter.

    [0142] Corresponding parts are provided with the same reference signs in all figures.

    [0143] FIG. 1 schematically shows an arrangement for recording a display image displayed by a display device 2 by means of a camera 1. The display device 2 comprises a plurality of display pixels 2.1 arranged in a matrix-like manner along rows 2.3 and columns 2.2. The rows 2.3 run approximately parallel to a horizontal x-direction. The columns 2.2 run approximately parallel to a vertical y-direction.

    [0144] The camera 1 comprises sensor pixels 1.3 arranged in a matrix-like manner in a sensor surface 1.2. A camera lens 1.1, which images the display image represented by the display pixels 2.1 onto the sensor surface 1.2, is arranged in front of the sensor surface 1.2. The optical axis O of the camera 1 is directed approximately perpendicularly and centrally onto the surface spanned by the display pixels 2.1.

    [0145] The camera 1 is arranged in its entirety (comprising the camera lens 1.1 and the sensor surface 1.2 with the sensor pixels 1.3) on a holding plate 3.2 of a positioning unit 3, for example clamped or screwed. The positioning unit 3 is set up for motorized movement of the holding plate 3.2 along the horizontal x-direction and along the vertical y-direction by means of motors 3.1. Instead of motors, other movement elements, for example piezo-electric actuators, can also be used for the linear movement along the x and y directions.

    [0146] By means of the positioning unit 3, the camera 1 can be moved by an offset V relative to the display device 2 along an offset path VP. The offset V denotes the length of the offset path VP.

    [0147] The motors 3.1 and the image recording by the camera 1 are controlled by a control unit 4 connected thereto in such a way that during the recording of a camera image formed by the sensor values read out from the sensor pixels 1.3, the camera 1 is offset in the x-direction and/or in the y-direction relative to the display device 2.

    [0148] It is also possible that a plurality of camera images is captured while the camera 1 is offset with respect to the display device 2.

    [0149] The camera images are read out by the camera 1 in an evaluation unit 5. If a plurality of camera images is recorded while the camera 1 is moving, the individual camera images are superimposed, for example added or averaged, by the evaluation unit 5 to form a Moiré-corrected result image.
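The superposition performed by the evaluation unit 5 can be sketched as a pixel-wise average (an illustrative Python/NumPy sketch, not part of the patent disclosure; the function name is hypothetical):

```python
import numpy as np

def average_images(camera_images):
    """Superimpose a plurality of camera images pixel-wise by averaging,
    one possible form of the superposition performed by the evaluation
    unit 5 (function name illustrative, not from the patent)."""
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in camera_images])
    return stack.mean(axis=0)

# Two synthetic 2x2 "camera images" recorded at different offsets.
imgs = [np.array([[0.0, 2.0], [4.0, 6.0]]),
        np.array([[2.0, 4.0], [6.0, 8.0]])]
result = average_images(imgs)
```

Adding instead of averaging differs only by a constant factor and does not change the Moiré-suppression effect of the superposition.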

    [0150] FIGS. 2A and 2B schematically show the change in position of the display pixels 2.1 with respect to the sensor pixels 1.3 when the camera 1 is moved with respect to the display device 2. For better clarification, only a part of the matrix-like arranged display pixels 2.1 and a part of the matrix-like arranged sensor pixels 1.3 are shown in each case.

    [0151] The sensor pixels 1.3 are spaced apart by a sensor pixel pitch D.sub.S. For simplified representation, the sensor pixel pitch D.sub.S is selected to be the same in the vertical y′ direction as in the horizontal x′ direction; but different distances are also possible vertically and horizontally.

    [0152] In an analogous manner, the display pixels 2.1 are spaced at a display pixel pitch D.sub.A. In the event that the reproduction scale β′ achieved by the camera lens 1.1 deviates from 1, the display pixels 2.1 appear at a distance increased or decreased by this reproduction scale compared to the sensor pixel pitch D.sub.S. In particular, the resulting reproduction scale determines which spatial frequencies appear erroneously in the camera image due to aliasing (Moiré).

    [0153] FIG. 2A shows the position of the sensor pixels 1.3 in a start position S. FIG. 2B shows the position of the sensor pixels 1.3 after a movement of the camera 1 along an offset path VP during an exposure. The offset path VP has a length which is referred to as offset V.

    [0154] In the projection onto the sensor surface 1.2, the offset path VP appears, taking into account the image-side imaging scale β′, as an offset path image VP′ with a length that is referred to as image offset V′ in the following.

    [0155] When the camera 1 is not moving, the luminance imaged by one or more display pixels 2.1 onto a sensor pixel 1.3 (the image of the display image on the sensor surface) is convolved only with the pixel aperture (the point spread function PSF) of the respective sensor pixel 1.3.

    [0156] In the spatial frequency domain, this means the multiplication of the spectrum of the imaged display image of the display device 2 (that is, of the matrix of display pixels 2.1) with the spectrum of the PSF (the modulation transfer function MTF). The matrix of all sensor pixels 1.3 effects a sampling, which in the frequency domain means a periodic replication of the resulting spectrum. In the process, higher-frequency components can fold into the base spectrum, which leads to aliasing.
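The folding of higher frequencies into the base spectrum can be made concrete with a one-dimensional numerical sketch (illustrative only, the frequencies chosen are hypothetical): a display structure above the Nyquist limit of the sensor sampling reappears as a low-frequency Moiré pattern.

```python
import numpy as np

# Illustrative 1-D aliasing example: a structure with spatial frequency
# 0.9 cycles per sensor pixel, sampled once per pixel (Nyquist limit 0.5),
# is indistinguishable from a structure at the low "Moire" frequency
# |0.9 - 1| = 0.1 cycles per pixel.
n = np.arange(64)                       # sensor pixel indices
sampled = np.cos(2 * np.pi * 0.9 * n)   # high-frequency display structure
alias = np.cos(2 * np.pi * 0.1 * n)     # low-frequency alias in the baseband
```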

    [0157] By moving the camera 1, the luminance imaged by one or more display pixels 2.1 onto a sensor pixel 1.3 (the image of the display image on the sensor) is convolved with the pixel aperture (point spread function PSF) of the respective sensor pixel 1.3 and additionally with a filter kernel h(x′, y′) determined by the offset path image VP′. The position curve of the offset path image VP′ is determined, starting from the start position S with the coordinates x′.sub.o, y′.sub.o, by the movement δ.sub.x′(t), δ.sub.y′(t) in the horizontal x′-direction and in the vertical y′-direction, respectively. At the locations determined by the totality of the values {δ.sub.x′(t), δ.sub.y′(t)}, the filter kernel h(x′, y′) assumes a value that is inversely proportional to the motion velocity v(t) of the display image with respect to the camera image at these locations and that is in particular different from 0. At all other locations (x′, y′) ∉ {δ.sub.x′(t), δ.sub.y′(t)}, h(x′, y′) = 0 applies.

    [0158] In the spatial frequency domain, this means the multiplication of the spectrum of the display image (the matrix of display pixels 2.1) with the spectrum of the PSF (the MTF) and with the spectrum of the offset path image VP′.

    [0159] By moving the camera 1, the PSF of the sensor pixels 1.3 is therefore convolved with the offset path image VP′, which in the spatial frequency domain leads to a multiplication of the spectrum with the Fourier transform of the offset path image VP′, that is, to a filtering. Thus, the choice of the offset path image VP′, which is determined by the offset path VP, offers the possibility of suppressing or reducing interfering frequency components.
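As a numerical sketch under the simplifying assumption of a constant-velocity horizontal offset (an assumption for illustration, not the only offset path covered by the text): the filter kernel h then is a box of length V′, whose spectrum has zeros at integer multiples of 1/V′, so a Moiré component with period equal to the image offset is cancelled by the movement.

```python
import numpy as np

# Assumed constant-velocity offset: filter kernel h is a box of length V'
# (image offset in sensor pixels), normalized so its values are inversely
# proportional to the (constant) motion velocity.
V_img = 8                            # assumed image offset V' in pixels
h = np.ones(V_img) / V_img           # box-shaped filter kernel

x = np.arange(256)
moire = np.sin(2 * np.pi * x / V_img)        # disturbance with period V'
filtered = np.convolve(moire, h, mode='valid')
# Each output value sums the sinusoid over exactly one full period,
# so the Moire component is suppressed to numerical noise.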

    [0160] Thus, the offset path VP can be used to determine a filter kernel h(x′, y′) for a spatially continuous filtering, in particular for a smoothing, which removes or suppresses Moiré interference in the camera image. With the proposed teach-in method, an offset path VP can be determined by discrete optimization in such a way that the best possible reduction of Moiré interference is achieved for the combination of the camera 1 with the display device 2.

    [0161] The advantage over a spatially discrete filtering of the camera image is that Moiré interference can be selectively removed or reduced at, in principle, arbitrary spatial frequencies. In particular, spatial frequencies that are not an integer multiple of the reciprocal of the sensor pixel pitch

    [00005] f.sub.0 = 1/D.sub.S

    and that are therefore not accessible to a spatially discrete filtering of the discretized camera image can also be attenuated or eliminated by a spatially continuous filtering with the filter kernel h(x′, y′). Thus, a suppression of Moiré interference is possible while maintaining a high spatial resolution.

    [0162] An embodiment relating to a further development of the invention is explained below with reference to FIGS. 3A, 3B and 4.

    [0163] FIG. 3A schematically shows a first magnitude plot |X.sub.τ1(f.sub.x′, f.sub.y′)| over a horizontal and a vertical spatial frequency f.sub.x′, f.sub.y′. The first magnitude plot |X.sub.τ1(f.sub.x′, f.sub.y′)| is the magnitude of the Fourier transform X.sub.τ1(f.sub.x′, f.sub.y′) of a first camera image exposed over a first exposure time τ1.

    [0164] The exposure of the first camera image is carried out in such a way that the camera 1 is not moved or is moved only slightly relative to the display device 2 during the first exposure time τ1. “Slightly moved” shall be understood here and in the following as a movement that causes an offset V that is smaller than the display pixel pitch D.sub.A, preferably smaller than D.sub.A/3. In other words, the offset path VP caused by the movement of the camera 1 relative to the display device 2 has an offset V parallel to the sensor surface 1.2 that is smaller than the display pixel pitch D.sub.A, preferably smaller than D.sub.A/3.

    [0165] The first camera image can be captured by selecting the first exposure time τ1 to be very short relative to the speed of movement of the camera 1 (that is, selected such that the offset path image VP′ is very short). In this case, a shortening of the first exposure time τ1 can be compensated for by increasing the brightness of the display pixels 2.1 in inverse proportion to the change in the first exposure time τ1.

    [0166] Additionally or alternatively, the recording of the first camera image can occur by reducing or stopping the movement of the camera 1 during the first exposure time τ1.

    [0167] Due to the comparatively very short or completely suppressed image offset V′, the filter effect of the filter kernel h(x′, y′) determined by the offset path image VP′, explained with reference to FIGS. 2A and 2B, is reduced. In the extreme case, when the camera 1 is completely at rest with respect to the display device 2, the filter kernel h(x′, y′) degenerates into a Dirac pulse and causes no change in the first camera image recorded by the sensor pixels 1.3.

    [0168] Accordingly, the Fourier transform X.sub.τ1(f.sub.x′, f.sub.y′) shows aliasing (i.e., Moiré interference). In FIG. 3A, light gray values indicate high magnitude amplitudes of the Fourier transform X.sub.τ1(f.sub.x′, f.sub.y′), while dark gray values indicate low magnitude amplitudes. The horizontal dimension in FIG. 3A corresponds to the horizontal spatial frequency f.sub.x′, which is assigned to the horizontal x′-direction of the sensor surface 1.2. The vertical dimension in FIG. 3A corresponds to the vertical spatial frequency f.sub.y′, which is assigned to the vertical y′-direction of the sensor surface 1.2.
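A magnitude plot of the kind shown in FIGS. 3A and 3B can be computed with a standard 2-D FFT (an illustrative sketch; the helper name and the centring of the zero frequency are presentation choices, not prescribed by the text):

```python
import numpy as np

def magnitude_spectrum(camera_image):
    """Magnitude |X(f_x', f_y')| of the 2-D Fourier transform of a camera
    image, with the zero frequency shifted to the centre as in the plots
    of FIGS. 3A and 3B (helper name illustrative)."""
    X = np.fft.fft2(np.asarray(camera_image, dtype=np.float64))
    return np.abs(np.fft.fftshift(X))

# A uniform 4x4 image has all its energy at the (centred) zero frequency.
mag = magnitude_spectrum(np.ones((4, 4)))
```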

    [0169] The aliasing (Moiré) in the first camera image causes prominent, bright (white) first perturbations M.sub.1 in the Fourier transform X.sub.τ1(f.sub.x′, f.sub.y′) in the manner of vertical and horizontal lines, corresponding to high magnitude amplitudes at certain horizontal and vertical spatial frequencies.

    [0170] FIG. 3B shows, in a manner analogous to FIG. 3A, a second magnitude plot |X.sub.τ2(f.sub.x′, f.sub.y′)| of the Fourier transform of a second camera image, which was exposed over a second exposure time τ2 and in which the same display image, represented by the same display device 2, has been captured as in the first camera image.

    [0171] In contrast to the first camera image, the exposure of the second camera image is performed in such a way that, by moving the camera 1 relative to the display device 2 during the second exposure time τ2, an offset path image VP′ is generated with an offset V parallel to the sensor surface 1.2, which offset V is between one and five times the display pixel pitch D.sub.A. In other words, the second camera image is recorded as already explained with reference to FIGS. 2A and 2B.

    [0172] In the Fourier transform of the second camera image, too, aliasing (Moiré) appears as a second perturbation M.sub.2. However, the already explained filter effect of the filter kernel h(x′, y′) determined by the offset path image VP′ causes a significant reduction of this second perturbation M.sub.2 compared to the first perturbation M.sub.1 in the Fourier transform of the first camera image.

    [0173] The suppression of the aliasing (Moiré) achieved in the second camera image by the offset V of the camera 1 with respect to the display device 2 during the second exposure time τ2 can be described, in terms of systems theory, as the transfer function of an offset filter, which results from the quotient of the Fourier transform of the second camera image X.sub.τ2(f.sub.x′, f.sub.y′) (with Moiré-suppressing offset V) and the Fourier transform of the first camera image X.sub.τ1(f.sub.x′, f.sub.y′) (without Moiré-suppressing offset V).

    [0174] In particular, the frequency-selective suppression by the offset V during the recording of the second camera image can be described as the amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| of such an offset filter.

    [0175] For example, the amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| can be determined from the magnitude ratio of the second to the first magnitude response:

    [00006] |G.sub.τ2(f.sub.x′, f.sub.y′)| = |X.sub.τ2(f.sub.x′, f.sub.y′)| / |X.sub.τ1(f.sub.x′, f.sub.y′)|
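The magnitude ratio of the second to the first magnitude response can be sketched in a few lines (illustrative Python, not part of the disclosure; the eps guard against division by zero is a practical assumption, not specified in the text):

```python
import numpy as np

def offset_filter_response(mag1, mag2, eps=1e-12):
    """Amplitude response |G_tau2| = |X_tau2| / |X_tau1| of the offset
    filter, computed element-wise from the first and second magnitude
    responses; eps guards against division by zero (an assumption)."""
    return np.asarray(mag2) / (np.asarray(mag1) + eps)
```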

    [0176] Alternatively, the determination of the amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| can also be based on a local segmentation which, in the manner of an image processing operation, assigns a binary value |{tilde over (G)}.sub.τ2(f.sub.x′, f.sub.y′)| ∈ {0, 1} to each magnitude value of the amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| in a threshold-dependent manner.

    [0177] For example, this image processing operation can be applied to the magnitude ratio of the second to the first magnitude response. This transforms the amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| into a binary amplitude response |{tilde over (G)}.sub.τ2(f.sub.x′, f.sub.y′)|. In general, other or additional operations, such as logical operations or morphological operations from the field of image processing, can be used to derive an amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| from |X.sub.τ1(f.sub.x′, f.sub.y′)| and |X.sub.τ2(f.sub.x′, f.sub.y′)|. For example, it is possible to binarize the first and second magnitude responses |X.sub.τ1(f.sub.x′, f.sub.y′)|, |X.sub.τ2(f.sub.x′, f.sub.y′)| before forming the quotient, in the same or a similar way as the amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)|, and then to derive an amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| by means of logical operators such as an exclusive-or (XOR) operation.
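The threshold-dependent binarization described in claim 4 can be sketched as follows (illustrative Python; the threshold value 0.5 and the eps guard are assumptions for the example):

```python
import numpy as np

def binary_offset_filter(mag1, mag2, threshold=0.5, eps=1e-12):
    """Binary amplitude response as in claim 4: 0 where the quotient
    |X_tau2|/|X_tau1| lies below the threshold, 1 where it is equal to
    or above it (threshold value chosen here only for illustration)."""
    quotient = np.asarray(mag2) / (np.asarray(mag1) + eps)
    return (quotient >= threshold).astype(np.uint8)

# At a Moire frequency |X_tau1| is large and |X_tau2| small -> filter 0;
# at undisturbed frequencies both magnitudes are similar -> filter 1.
mask = binary_offset_filter(np.array([10.0, 1.0]), np.array([1.0, 1.0]))
```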

    [0178] In general, the goal in defining an offset filter is to determine an amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| in which pairs of horizontal and vertical spatial frequencies (f.sub.x′, f.sub.y′) are assigned a low magnitude or zero if a high amplitude is determined there in the first (uncorrected) magnitude response |X.sub.τ1(f.sub.x′, f.sub.y′)| and a comparatively very low amplitude is determined in the second (corrected) magnitude response |X.sub.τ2(f.sub.x′, f.sub.y′)|.

    [0179] FIG. 4 schematically shows the amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| of an offset filter, which is determined from the first and the second magnitude response |X.sub.τ1(f.sub.x′, f.sub.y′)|, |X.sub.τ2(f.sub.x′, f.sub.y′)|. The amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| specifies the degree of attenuation (or amplification) to which a camera image is subjected in a frequency-selective manner for each combination of a horizontal and a vertical spatial frequency.

    [0180] The application of a binary amplitude response |{tilde over (G)}.sub.τ2(f.sub.x′, f.sub.y′)| has the advantage over an offset filter with a continuous amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| that (residual) Moiré present in |X.sub.τ2(f.sub.x′, f.sub.y′)| can also be suppressed, provided it is significantly weaker than in |X.sub.τ1(f.sub.x′, f.sub.y′)|, since all frequency components that differ significantly between |X.sub.τ1(f.sub.x′, f.sub.y′)| and |X.sub.τ2(f.sub.x′, f.sub.y′)| are completely suppressed. Moreover, simple additional morphological operations on the binary amplitude response |{tilde over (G)}.sub.τ2(f.sub.x′, f.sub.y′)|, such as dilation, can further increase the robustness of the method.
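The mentioned dilation of the binary amplitude response can be sketched by growing the suppressed (zero-valued) regions into their 3×3 neighbourhood (an illustrative sketch, not the patent's implementation; periodic array shifts are used, which matches the periodicity of the discrete spectrum):

```python
import numpy as np

def widen_suppression(binary_response):
    """Dilate the suppressed (zero-valued) regions of a binary amplitude
    response with a 3x3 neighbourhood, so that spatial frequencies
    immediately adjacent to detected Moire frequencies are suppressed
    as well (function name illustrative)."""
    suppressed = np.asarray(binary_response) == 0
    grown = suppressed.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            grown |= np.roll(suppressed, (dy, dx), axis=(0, 1))
    return (~grown).astype(np.uint8)

# A single suppressed frequency grows into a 3x3 suppressed block.
mask = np.ones((5, 5), dtype=np.uint8)
mask[2, 2] = 0
widened = widen_suppression(mask)
```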

    [0181] By using an offset filter, it is possible to perform Moiré suppression even on a camera image that was taken with no or only an insignificant offset V during exposure. For this purpose, this camera image is subjected to a Fourier transformation. The Fourier transform X(f.sub.x′, f.sub.y′) of the camera image is frequency-selectively multiplied by the amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| of the offset filter determined from the first and the second camera image as explained above:


    X′(f.sub.x′, f.sub.y′) = X(f.sub.x′, f.sub.y′)·|G.sub.τ2(f.sub.x′, f.sub.y′)|

    [0182] The spatial-frequency-weighted Fourier transform X′(f.sub.x′, f.sub.y′) obtained in this way is particularly attenuated at spatial frequencies f.sub.x′, f.sub.y′ at which strong first perturbations M.sub.1 are present in the first camera image but no or only small perturbations M.sub.2 are present in the second camera image, because for these spatial frequencies the amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| of the offset filter becomes zero or nearly zero. According to the invention, the spatial-frequency-weighted Fourier transform X′(f.sub.x′, f.sub.y′) is then subjected to an inverse Fourier transformation, as a result of which a Moiré-corrected result image is obtained.
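The sequence of Fourier transform, frequency-selective multiplication and inverse Fourier transform can be sketched as follows (illustrative Python; the assumption that the amplitude response is given in unshifted FFT order is a convention of this sketch):

```python
import numpy as np

def apply_offset_filter(camera_image, amplitude_response):
    """Moire correction of a camera image taken without offset movement:
    Fourier transform, frequency-selective multiplication with the
    offset-filter amplitude response |G| (assumed here in unshifted FFT
    order), and inverse Fourier transformation; the real part is taken
    since the result image is real-valued up to numerical noise."""
    X = np.fft.fft2(np.asarray(camera_image, dtype=np.float64))
    X_filtered = X * np.asarray(amplitude_response, dtype=np.float64)
    return np.real(np.fft.ifft2(X_filtered))

# With an all-pass response (|G| = 1 everywhere) the image is unchanged.
img = np.arange(16.0).reshape(4, 4)
restored = apply_offset_filter(img, np.ones((4, 4)))
```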

    [0183] In other words: if the geometric conditions (with respect to the arrangement of camera 1 and display device 2) and the essential lighting conditions are unchanged or only slightly changed compared to the recording of the first and the second camera image, then, by applying the amplitude response |G.sub.τ2(f.sub.x′, f.sub.y′)| to the Fourier transform of a camera image and performing a subsequent inverse Fourier transformation, the same or a similar suppression of aliasing (Moiré) interference can be achieved as by moving the camera 1 relative to the display device 2 along the same offset path VP as was selected when the second camera image was recorded.

    [0184] This makes it possible to achieve the same or similar suppression of disturbances M.sub.1, M.sub.2 even if the camera 1 is not moved or is moved only slightly during the recording of the camera image. In particular, this makes it possible to avoid a mechanical load on the camera 1 and/or the display device 2, such as is generated by vibration-like offset movements.

    [0185] Furthermore, it is thereby also possible to suppress disturbances in camera images which are recorded with a very short exposure time and in which, as a result, no offset movement is possible which would be required for sufficient disturbance suppression. In addition, it is possible to suppress disturbances in further camera images which were recorded in a similar recording situation, for example camera images from a different but identically constructed display device.

    [0186] The third and further camera images may be taken from different display devices 2 than the first and second camera images. For example, display devices 2 that are manufactured identically in a continuous production process can, within the usual tolerances, be arranged for quality control in the same position relative to the camera 1 as the first display device 2 from which the first and second camera images were taken.

    [0187] The third and further camera images are then recorded from these successively exchanged display devices 2. Moiré interference in the third and further camera images is removed or attenuated by applying the offset filter determined from the first and second camera images of the identically constructed first display device 2.

    [0188] In order to obtain greater robustness against the minor geometric changes in the recording situation that are to be expected in practice between identical display devices 2, a binary amplitude response |{tilde over (G)}.sub.τ2(f.sub.x′, f.sub.y′)| determined as described above can, for example, be processed by means of morphological operations such as dilation. This also suppresses spatial frequencies that are in the immediate vicinity of the spatial frequencies originally detected as Moiré interference. Thus, even minor shifts of the aliasing frequencies caused by a slightly changed recording situation, for example an axial rotation about the surface normal of the display device 2, are at least partially compensated.

    LIST OF REFERENCE SIGNS

    [0189] 1 camera

    [0190] 1.1 camera lens, imaging optics

    [0191] 1.2 sensor surface

    [0192] 1.3 sensor pixels

    [0193] 2 display device

    [0194] 2.1 display pixels

    [0195] 2.2 column

    [0196] 2.3 row

    [0197] 3 positioning unit

    [0198] 3.1 motor

    [0199] 3.2 holding plate

    [0200] 4 control unit

    [0201] 5 evaluation unit

    [0202] D.sub.S sensor pixel pitch

    [0203] D.sub.A display pixel pitch

    [0204] f.sub.x′, f.sub.y′ horizontal, vertical spatial frequency

    [0205] |G.sub.τ2(f.sub.x′, f.sub.y′)| amplitude response

    [0206] M.sub.1, M.sub.2 first, second perturbations

    [0207] O optical axis

    [0208] S start position

    [0209] V offset

    [0210] V′ image offset

    [0211] VP offset path

    [0212] VP′ offset path image

    [0213] |X.sub.τ1(f.sub.x′, f.sub.y′)| first magnitude plot

    [0214] |X.sub.τ2(f.sub.x′, f.sub.y′)| second magnitude plot