METHOD FOR SIMULATING AN OPTICAL IMAGE REPRESENTATION

20220247906 · 2022-08-04

    US classification

    • 1/1

    CPC classification

    G06T5/50
    H04N23/71
    G06T3/4007
    G02B27/0012
    H04N23/67
    G06T15/06
    G06T2207/20221

    International classification

    H04N5/235
    G06T3/40
    G06T5/50
    H04N5/232

    Abstract

    A method for generating a brightness contribution for a picture element of an image includes providing a first data record including data which describe the effect of the lens on light rays, providing a second data record including data about a point of incidence of a light ray on the image recorder and about a virtual front surface, providing a transformation rule, calculating a first point of intersection of the light ray with the virtual front surface and a direction of the light ray at the first point of intersection by applying the transformation rule to the first and second data records, determining the brightness contribution of the light ray, and storing an information item regarding the calculated brightness contribution. The first data record includes data about a second surface and the second data record includes data about a second point of intersection of the light ray with the second surface.

    Claims

    1. A method for generating a brightness contribution for a picture element of an image by simulating an image representation of a scene using an optical imaging system which comprises an image recorder located on a first surface and a lens, the method comprising: providing a first data record comprising data which describe an effect of the lens to be simulated on light rays; providing a second data record comprising data about a point of incidence of a light ray on the image recorder and about a virtual front surface; providing a transformation rule; calculating a first point of intersection of the light ray with the virtual front surface and a direction of the light ray at the first point of intersection by applying the transformation rule to the first data record and the second data record; determining the brightness contribution of the light ray; and storing an information item regarding the brightness contribution of the light ray, wherein: the first data record comprises data about a second surface, and the second data record comprises data about a second point of intersection of the light ray with the second surface.

    2. The method as claimed in claim 1, wherein: the lens has at least one adjustable imaging parameter, and the second data record contains information items about the at least one adjustable imaging parameter of the lens.

    3. The method as claimed in claim 2, wherein the at least one adjustable imaging parameter comprises at least one of a focus setting, a focal length, a magnification, and a field curvature of the lens.

    4. The method as claimed in claim 2, wherein the lens comprises a stop, and the second surface coincides with the stop.

    5. The method of claim 4, wherein: the stop is an aperture stop, and the second surface coincides with the aperture stop.

    6. The method as claimed in claim 5, wherein one of the at least one adjustable imaging parameter describes at least one dimension of the aperture stop.

    7. The method as claimed in claim 5, wherein the aperture stop is at least approximately circular and information items relating to the second point of intersection of the light ray with the second surface contain a normalized radius.

    8. The method as claimed in claim 1, wherein the first data record comprises data in relation to at least one shading area in the lens and in relation to the effect of a part of the lens on at least one light ray which extends between the at least one shading area and the image recorder, and wherein the method, before storing a brightness component of the at least one light ray, further comprises: calculating a third point of intersection with the at least one shading area; checking whether the at least one light ray is absorbed or transmitted at the third point of intersection; and discarding the light ray or setting the brightness component to zero if the at least one light ray is absorbed.

    9. A method for generating the picture element of the image, the method comprising: selecting the point of incidence for light rays on the image recorder; selecting a plurality of different second points of intersection on the second surface; carrying out ray tracing as claimed in claim 1 for each of the second points of intersection; summing the brightness contributions arising; and storing a result of the summation.

    10. A method for generating an image, the method comprising: selecting a plurality of picture elements on the image recorder; calculating the brightness contribution of the light ray incident on each of the picture elements with the method as claimed in claim 9; and storing results.

    11. The method as claimed in claim 10, wherein the brightness contribution of each of the light rays simulated to this end, which intersect the virtual front surface at the first point of intersection, is determined in each case with a pinhole camera image, the nature of the image being such that it corresponds to the image generated by a pinhole camera placed at the respective first point of intersection.

    12. The method as claimed in claim 11, wherein the virtual front surface and an entrance pupil of the lens to be simulated coincide.

    13. The method as claimed in claim 12, wherein the second surface coincides with the virtual front surface.

    14. The method as claimed in claim 11, wherein at least one of the pinhole camera images is calculated by interpolation or extrapolation from other pinhole camera images.

    15. A method for generating the image, the method comprising: providing a real image recorded by a real camera; providing a virtual image generated as claimed in claim 11; fusing or overlaying at least a portion of the real image and at least a portion of the virtual image; and storing the image created, wherein at least one adjustable lens parameter provided for a simulation corresponds at least approximately to the adjustable lens parameters used during a real recording.

    16. A method for generating an image sequence including individual images, the method comprising: providing a virtual scene; providing a camera position in relation to the virtual scene; calculating the individual images of the image sequence in accordance with the method as claimed in claim 11; and storing the image sequence.

    17. A computer readable non-transitory storage media encoded with software comprising computer executable instructions that when executed by a processor cause the processor to: calculate a first point of intersection of the light ray with a virtual front plane and a direction of the light ray at the first point of intersection by applying the transformation rule to the first data record and the second data record; determine the brightness contribution of the light ray; and store an information item regarding the calculated brightness contribution of the light ray, wherein the first data record comprises data about a second surface, and the second data record comprises data about a second point of intersection of the light beam with the second surface.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0024] The disclosure will now be described with reference to the drawings wherein:

    [0025] FIG. 1 schematically shows the principle of image recording;

    [0026] FIG. 2 schematically shows the principle of the simulation according to an exemplary embodiment of the disclosure;

    [0027] FIG. 3 schematically shows the effect of the optical imaging of a lens;

    [0028] FIG. 4 schematically shows the construction of the first data record;

    [0029] FIG. 5 schematically shows the construction of the second data record;

    [0030] FIG. 6 schematically shows the principle of a ray tracing simulation;

    [0031] FIG. 7 schematically shows the optical structure of an exemplary lens to be simulated;

    [0032] FIG. 8 schematically shows the positioning of pinhole camera images relative to the entrance pupil;

    [0033] FIG. 9 schematically shows an arrangement of pinhole camera images relative to the entrance pupil, in the form of a Fibonacci spiral;

    [0034] FIG. 10 schematically shows a random arrangement of pinhole camera images relative to the entrance pupil;

    [0035] FIG. 11 schematically shows the exemplary construction of an image simulation on the basis of an abstract stairway object and an associated pinhole camera image of this stairway object; and

    [0036] FIG. 12 schematically shows images of an abstract stairway object simulated according to an exemplary embodiment of the disclosure with one, a few and many pinhole camera images used to this end, and the positioning of the pinhole cameras relative to the entrance pupil.

    DESCRIPTION OF EXEMPLARY EMBODIMENTS

    [0037] The image recording principle is shown schematically in FIG. 1. A given lens 1, to be simulated, for photographic image recording includes lens elements 2 and frequently a size-adjustable aperture stop 3. Each lens element has a certain radius and is held by a non-transparent mount 4. The image recorder 100, which is also referred to as sensor below, is attached at a position that depends on the specific design of the optics of the lens 1 to be simulated. The sensor 100 to be simulated may include a plurality of light detectors, the so-called pixels 5, which are usually arranged in the form of a grid. Each of these pixels records a picture element during the recording. The image arises optically at the location of the sensor 100, and it is there that, in a real camera system, the respective incident luminous intensity is detected electronically or chemically at each pixel: the recording of a picture element. The sensor may be equipped with a grid-shaped color filter 101, for example arranged in a so-called Bayer pattern, such that each pixel 5 can only detect a certain color. Another possibility is that each pixel 5 can detect all three primary colors, for example red, green, and blue. In the simulation, this corresponds to the registration and storage in the processing computer system of at least the calculated intensity and, if required, the color of the simulated incident light ray. Storage is typically implemented in the so-called random-access memory, the RAM, but may also be implemented for example in what is known as a flash memory or on a magnetic disk. Light rays which emanate from a point in a scene to be imaged and potentially contribute to the image representation pass through the lens and are refracted at the interfaces of the lens elements in the process, that is to say their direction is altered in each case in accordance with the optical law of refraction. One portion of these light rays is incident on parts of the mount 4 of the lens 1, is absorbed there, and does not contribute to the image representation. The absorption of rays at parts of the lens is referred to as shading and brings about what is known as vignetting. Another portion of the light rays may be incident on the aperture stop 3, is absorbed there, and does not contribute to the image representation. A further portion of the light rays passes through the lens and strikes the image recorder 100. These light rays contribute to the image representation.

    [0038] The lens 1 to be simulated may include one or more parameters that are adjustable on the lens 1. Inter alia, these may be one or more of the following:

    [0039] An aperture stop: in many lenses, at least one approximately circular stop is mounted, the optically transmissive diameter of which may be altered by the user. Often this stop is realized as what is known as an iris diaphragm.

    [0040] For setting the focus, a lens may be equipped with one or more lens elements which are displaceable along the optical axis relative to the image detection plane and typically also displaceable relative to other optical elements.

    [0041] For variably adjusting the focal length, a lens may be equipped with one or more lens elements which are displaceable along the optical axis relative to the image detection plane and typically also displaceable relative to other optical elements. Instead of the focal length, the magnification of the lens may also be used as a continuous input variable. The magnification is a scaling factor which arises from the ratio of the size of an image of an object on the sensor or film plane and the size of this imaged, focused object. It was found that the choice of magnification as input variable is particularly well suited to the method according to an exemplary embodiment of the disclosure.

    [0042] For variably adjusting the field curvature, a lens may be equipped with one or more lens elements which are displaceable along the optical axis relative to the image detection plane and typically also displaceable relative to other optical elements.

    [0043] For the variable manipulation of a wavefront of the light passing through the lens, the lens may be equipped with displaceable free-form elements. By way of example, these can be one or two so-called Alvarez elements which are displaceable perpendicular to the optical axis.

    [0044] Such parameters that are adjustable on the lens 1 to be simulated correspond to typically continuously adjustable input variables in a lens simulation. The known teaching regarding the simulation of the optical imaging by lenses from Schrade et al. serves as a starting point for the method according to an exemplary embodiment of the disclosure. In this known teaching, the continuously adjustable input variables are defined before the calculation of the coefficients that model the lens and determine the values of the latter. Should these values be altered, the coefficients modeling the lens need to be recalculated in this simulation method.

    [0045] The simulation according to an exemplary embodiment of the disclosure of the optical imaging of a lens to be simulated is based on a transformation rule 900. This is shown schematically in FIG. 2. In a calculation 4000 carried out by a computer 300, this transformation rule 900 processes data from one or more input data records 1000 and converts these into an output data record 2000. The transformation rule 900 may include, inter alia, a polynomial. The transformation rule 900 may include, inter alia, a spline function. The effect of the optical imaging shown in FIG. 3 is expressed in that a simulated ray 800, which is emitted by the simulated object and which intersects a surface differing from the simulated sensor 100 at a certain position and with a certain direction as incident ray 2010, following the passage through the lens is incident on a point of the simulated sensor 100 as emergent ray 2020 in a certain direction, this direction generally deviating from the original ray direction. The resultant beam offset and the resultant change in direction of the traversing light ray 800 are the effect of the lens 1. The rays referred to as incident ray 2010 and emergent ray 2020 are partial rays of the ray 800 passing through the lens. The information required for the complete description of the light ray 800 may also include information regarding polarization and one or more wavelengths or colors, in each case connected with an intensity.

    [0046] The effects on a traversing ray 800 caused by the lens 1 may include a ray offset, a change in direction, a change in polarization, a change in color, an attenuation of the ray intensity, a split of the ray into partial rays or other changes. A plurality of these effects may also occur simultaneously in relation to one ray 800. By way of example, a split into partial rays may be caused by a partial transmission and reflection at optically refractive surfaces or at coating surfaces, by a split into ordinary and extraordinary rays in optically anisotropic crystals, or by diffraction at diffractive structures or holograms. All these effects are effects of a lens 1 that are detected according to the disclosure.

    [0047] For practical reasons, optical simulations are often carried out in such a way that the rays are traced in the opposite direction, that is to say starting from the sensor. This description of the disclosure follows this approach, but explicit reference is once again made to the fact that a ray simulation in the light direction is likewise captured by the exemplary embodiment of the disclosure.

    [0048] According to an exemplary embodiment of the disclosure, the simulated sensor 100 is located on a first surface, the sensor surface 110. Light rays 800 to be simulated are incident on this surface at a point of incidence 111.

    [0049] According to an exemplary embodiment of the disclosure, a first data record 1010, the input data record, contains, inter alia, data suitable, in conjunction with the transformation rule 900, for describing the effect of the simulated lens on simulated light rays 800 that pass through the simulated lens. In this case, the characteristics of the simulated lens are contained only in the first data record 1010 and not in the transformation rule 900. By contrast, neither the data record 1010 nor the transformation rule 900 contains data about light rays to be simulated or parameters that are adjustable on the simulated lens. Together, the transformation rule 900 and the data record 1010 may be referred to as a virtual lens 901, since sharing only these information items is sufficient for the user to be able to create renderings of virtual scenes with their own data. As shown schematically in FIG. 4, the first data record 1010 may contain one or more partial data records 1010r, 1010g, and 1010b, which are each assigned to a certain wavelength of the light, or to a certain wavelength band. This takes account of chromatic imaging aberrations, which occur in real lenses for physical reasons. Typically, three such partial data records 1010r, 1010g, and 1010b are kept available, specifically for red, green, and blue. However, use can also be made of more or fewer such partial data records. According to an exemplary embodiment of the disclosure, color effects may also be calculated by a differential approach from only one data record 1010, especially if the chromatic aberrations are small.

    [0050] According to an exemplary embodiment of the disclosure, the first data record 1010 furthermore contains data which include information about a virtual entrance surface or virtual front surface 1011, which is intersected by simulated rays of the imaging beam path or the continuation thereof at a first point of intersection 1013. This surface is typically disposed upstream or downstream of the simulated sensor. Typically, this virtual front surface 1011 is a rotationally symmetric surface, the axis of symmetry of which corresponds to the optical axis OA of the simulated lens, and which is typically disposed upstream of the actual surfaces of the lens to be simulated. In particular, but without this list being exhaustive, this includes spherical surfaces and other surfaces formed by rotating conic sections, and planes. The virtual front surface 1011 may coincide with a lens-element surface of the simulated lens, but this need not be the case. Typically, the virtual front surface 1011 precisely does not coincide with a lens-element surface of the simulated lens. Another typical position for the virtual front surface is the entrance pupil of the lens. Particularly typically, the virtual front surface 1011 is placed such that it is further away from the sensor 100 than the lens-element surface which is furthest away from the sensor. Typically, the virtual front surface is chosen such that its position and shape remain constant when there is a change in the focal length, the focus or other adjustable lens parameters. This reduces the complexity of the simulation and increases the confidentiality since the virtual front surface 1011 thus contains no information whatsoever about the real construction of the lens to be simulated.

    [0051] The data record may include a polynomial. Depending on the desired fitting precision of the ray position or the ray direction and the information regarding shading surfaces that generate vignetting, the polynomial may include at least 5, or at least 10, or at least 20, or at least 30 coefficients. By way of example, a sparse polynomial with 20 coefficients of no more than 6th order for ray position and ray direction, and with 5 coefficients of no more than 3rd order for shading information may be used.

    [0052] A second data record 1020, schematically shown in FIG. 5, contains, inter alia, information items 1021 about a simulated light ray 800, including information about a point of incidence 111 of this simulated light ray 800 on the sensor surface 110 and a directional information item 112 for this ray. The information about the point of incidence 111 may be provided by a three-dimensional vector in a coordinate system linked to the simulated lens, or else by two-dimensional parameters which describe the position on the sensor surface 110 in parameterized fashion, as is shown in FIG. 6. By way of example, without this list being exhaustive, this may be an azimuth angle and a radius or a normalized radius or an elevation angle. The directional information item 112 of the simulated ray may be given, in exemplary fashion and without this list necessarily being exhaustive, by the specification of a three-dimensional vector or else by the specification of two angles in relation to a coordinate system linked to the simulated lens. A further particularly typical representation of the direction according to an exemplary embodiment of the disclosure may also include the parameterized specification of a second point of intersection 201 with a second surface 200, shown in FIG. 6, which in conjunction with the point of incidence 111 yields a bijective relationship for the directional information item 112. For the parameterized specification of the second point of intersection 201 with the second surface 200, it was found that the specification of a radius r.sub.n normalized to 1 is particularly advantageous because the directional information item 112 in that case merely includes an angle between 0 and 360 degrees and a radius in the range 0 < r.sub.n ≤ 1, and a specification of the axial position of the second surface 200 is not required anymore either. All that is required for the further calculation is a set of two parameters, for example angle and normalized radius, while the axial position may remain unconsidered. It was found to be advantageous to choose the position of an aperture stop for the second surface 200 because most rays are frequently shaded there, especially if the lens is stopped down. Then, the maximum normalized radius r.sub.n = 1 is chosen for the simulation of the lens with the open stop, while a correspondingly smaller value for r.sub.n is chosen when simulating a stopped-down lens. The second data record 1020 may include these information items 1022 about the stopping down of the lens to be simulated. Since the actual axial position of the second surface 200 and the actual size of the aperture stop are unknown, or at least need not be known, on account of the parameterization, while the effect of the aperture stop is reproduced by a suitable choice of the radius r.sub.n, maximum confidentiality regarding the actual position of the aperture stop can be ensured.
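
    For illustration, the content of the second data record 1020 could be organized as follows (a minimal Python sketch; all names and defaults are illustrative assumptions, not part of the disclosure):

        from dataclasses import dataclass

        @dataclass
        class SecondDataRecord:
            # 1021: point of incidence 111 on the sensor surface 110, in mm
            x_s: float
            y_s: float
            # directional information item 112, parameterized as the second
            # point of intersection 201: azimuth angle and normalized radius
            azimuth_deg: float
            r_n: float              # 0 < r_n <= 1
            # 1022: stopping-down, as the maximum admitted normalized radius
            r_b: float = 1.0        # 1.0 corresponds to the open stop
            # 1023: further adjustable parameters, e.g. the magnification
            beta: float = 0.0
            # 1024: wavelength of the light ray to be simulated, in nm
            wavelength_nm: float = 550.0

            def is_transmitted(self) -> bool:
                # only rays within the transmissive region of the set stop
                return self.r_n <= self.r_b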

    [0053] In addition to the typical information about the radius of an approximately circular aperture, data describing a change in shape of the aperture stop may also be included. By way of example, the iris diaphragm of a Carl Zeiss Sonnar 2.8/135 with C/Y mount and a set f-number of 4 exhibits a pronounced deviation from rotational symmetry. The transmissive region of the stop or aperture stop is described by at least one dimension, for example a radius; however, other dimensions can also be used to describe more complex forms of the stop.

    [0054] Furthermore, the second data record 1020 may include information items 1023 relating to one or more further imaging parameters that are adjustable on the lens and influence the imaging. According to an exemplary embodiment of the disclosure, this includes, by way of example but without this list being exhaustive, the focal length and/or the focal distance of the simulated lens and/or a parameter for variable adjustment of the field curvature. As an alternative to the focal length of the lens, the imaging scale may also be contained, in relation to the imaging scale of objects imaged in focus onto the sensor plane. Furthermore, the data record 1020 may include information about beam deflections by variable mirrors or other optical elements, such as diffractive elements, polarizers, optical filters such as neutral density filters, frequency filters, variable beam splitters or Alvarez plates, or other movable free-form surfaces.

    [0055] Furthermore, the second data record 1020 may include information items 1024 relating to one or more wavelengths, wavelength ranges or colors of the light rays to be simulated.

    [0056] Consequently, the second data record 1020 contains all variables chosen or set by the user of the simulation, such as parameters that are adjustable on the simulated lens and information about light rays to be simulated 1021, 1024. By contrast, the first data record 1010 only contains the data which describe the imaging behavior of the lens for all conceivable combinations of the adjustable parameters 1022, 1023. It is self-evident that the full range of parameters 1022, 1023 that should be admitted is considered during the training or optimization phase for creating the first data record 1010. Usually, this range is specified by the possibilities of the real model of the lens to be simulated.

    [0057] To perform the simulation according to an exemplary embodiment of the disclosure of a given lens, the first data record 1010 is required before the actual simulation is carried out, said first data record, by application of the transformation rule 900, describing the imaging behavior of the lens to be simulated and the virtual front surface 1011 of the lens 1 to be simulated. The data record 1010 can be obtained by calculation, as set forth below.

    [0058] Initially, a second surface 200 is chosen between the position of the sensor and the position of the virtual front surface. The data record 1010 is precalculated while taking account of the possible adjustable parameters 1022, 1023 and the second surface 200. To this end, the following steps are carried out for one or more wavelengths or wavelength ranges:

    [0059] Ray tracing is carried out with the aid of the exact optical construction of the lens to be simulated for a plurality of rays and a plurality of settings of the adjustable parameter or parameters such as focal distance or focal length.

    [0060] The data record 1010 is calculated by way of an optimization algorithm. To this end, use can be made of known iterative optimization algorithms or fit algorithms from the prior art. A particularly suitable algorithm is known as the “orthogonal matching pursuit” algorithm. The result of this fitting or optimization process is the data record 1010, which supplies position and direction of the output ray for a given input ray and given adjustable lens parameters while taking account of the second data record 1020.
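
    By way of illustration, such a fit could be set up as follows (a Python sketch; the use of scikit-learn's OrthogonalMatchingPursuit, the training arrays X_train and y_train, and the term limits are assumptions for this example, not a prescription of the disclosure):

        import numpy as np
        from itertools import product
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def monomial_dictionary(X, max_order=6):
            """X: (n_rays, 5) array with columns x_s, y_s, x_a, y_a, beta.
            Returns all monomials up to max_order and their exponent tuples."""
            exps = [e for e in product(range(max_order + 1), repeat=5)
                    if 0 < sum(e) <= max_order]
            cols = [np.prod(X ** np.array(e), axis=1) for e in exps]
            return np.stack(cols, axis=1), exps

        # X_train: traced input rays; y_train: one output variable, e.g. x_f,
        # obtained from exact ray tracing through the real lens design
        D, exps = monomial_dictionary(X_train, max_order=6)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=20).fit(D, y_train)
        # keep only the selected sparse terms: (coefficient, exponents)
        model = [(c, exps[i]) for i, c in enumerate(omp.coef_) if c != 0.0]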

    [0061] In addition to this optimization process, one or more locations may be identified in the lens 1 to be simulated at which rays are shaded and hence absorbed because they run into the housing of the lens or into a stop, for example. The aperture stop 3, the diameter and/or shape of which can be chosen, is a separately labeled and typical shading surface 210. Many lenses contain such a stop. The information about all shading surfaces 210 considered becomes part of the data record by virtue of a plurality of partial data records 1030 being formed. Each of these partial data records 1030 contains information about the coefficients which are suitable for the partial simulation of the lens to be simulated, from the sensor surface to the respective shading surface 210. The actual axial positions of these shading surfaces 210 are not relevant. A ray position can be specified for each shading surface 210 by way of a parameterized approach, the parameters for example including an azimuth angle and a radius or an elevation angle or, particularly typically, a normalized radius.

    [0062] The data record 1010 must at least include information about the coefficients which are suitable for simulating the ray paths from the sensor surface to the virtual front surface 1011 or to the lens front surface or to the entrance pupil of the lens 1.

    [0063] The virtual front surface 1011 may be defined in advance. However, it is also possible to optimize the virtual front surface 1011 as a part of the optimization process for the data record 1010.

    [0064] As an alternative to this method of optimized calculation, the data record 1010 may also be obtained with the aid of measurements on at least one physical embodiment of the lens to be simulated. To this end, exemplary objects can be imaged using the lens and the arising images can be recorded. This may be carried out for a plurality of settings of the parameters that are adjustable on the lens, for example focus and/or focal length/magnification and/or opening of the aperture stop. The required set of coefficients which forms the data record 1010 can then be established by back-calculation from the images obtained. A further option for obtaining the data record by a measurement is to shine a single light beam at a suitable wavelength, for example a laser beam, through a physical embodiment of the lens to be simulated. The effect of the lens on the beam can be detected at the other end of the lens by measurement. This can be carried out for further wavelengths and different points of incidence and directions of the light beam. The required set of coefficients which forms the data record 1010 can then be established by back-calculation from the measurement values thus obtained.

    [0065] It is self-evident that the values of the data record 1010 obtained by calculation and/or measurement can still be subsequently altered empirically, for example to reproduce manufacturing tolerances or imperfections of physical embodiments of the lenses.

    [0066] To carry out the simulation according to an exemplary embodiment of the disclosure of the imaging of a given lens 1, a computer program product is loaded onto a non-transitory computer readable storage medium of a computer 300, said computer program product putting the computer 300 into a position where it can carry out, by a processor, a calculation in accordance with the transformation rule 900. The previously pre-calculated first data record 1010, which contains information about the lens to be simulated and the virtual front surface 1011, is likewise loaded onto the computer 300.

    [0067] A data record 1020 is generated, said data record including information items 1021 about a ray 2020 to be simulated that is incident on the sensor and information items about one or more parameters 1022, 1023 that are adjustable on the lens. In addition to the color or wavelength, the information items 1021 about the ray 2020 to be simulated include a point of incidence 111 of the ray on the sensor 100 and a directional information item 112, which may also be provided from an information item about the second point of intersection 201 with a second surface 200 which may contain the aperture stop. The point of incidence 111 typically corresponds to the position of a simulated pixel of the simulated sensor 100. Typically, only those rays are considered which are incident on the second surface 200 within the region that is transmissive according to the information item 1022 about the set stop. In the case of a circular aperture stop with a normalized radius r.sub.b, this may be realized in such a way that only rays to be simulated with points of intersection with a radius r < r.sub.b are generated and traced. In the case of complicated geometries of the aperture stop 3 or the shading surface 210, it is possible to generate rays that strike the second surface 200 outside of the region that is transmissive according to the information item 1022 about the set stop, but these are discarded since only rays that come from the transmissive region of a stop can reach the sensor. The procedure described here for the aperture stop can be carried out analogously for all other shading surfaces 210 possibly present in the lens.

    [0068] Then, to simulate an individual light ray 800 which contributes to the image representation to be simulated at a pixel to be chosen, the computing unit of the computer 300 carries out a calculation according to the transformation rule 900 with the input data records 1010, 1020, this yielding an output data record 2000 which specifies which incident ray 2010 is converted by the simulated lens into the ray 2020, described by the input data record 1020, that is incident on the sensor. If one or more shading surfaces, for example an aperture stop, are present in the lens to be simulated, the simulation can run in a plurality of partial steps, wherein a calculation is carried out in each case from the sensor surface 110 to one of the shading surfaces 210. A separate partial step is calculated for each of the shading surfaces 210, the sequence in which these partial steps are carried out being irrelevant. Typically, surfaces causing much shading are considered before those that bring about little shading. The partial steps can also run in parallel. A test step follows in each case, regarding whether the ray is absorbed by or transmitted through the shading surface at a third point of intersection 211. Absorbed rays are discarded, transmitted rays are traced further. Ray tracing is also implemented from the sensor surface to the virtual front surface 1011 or the lens front surface or the entrance pupil. The output data record 2000 obtained in this way contains information items about the ray 2010 incident on the virtual front surface 1011 or the lens front surface or the entrance pupil. These include the first point of intersection 1013 of the ray 2010 with the virtual front surface 1011 and a directional information item, described for example by a three-dimensional vector, by a parameterized representation of the point of intersection with another surface, or by two angles in relation to a coordinate system linked to the simulated lens.
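
    The per-ray calculation with partial shading steps could be sketched as follows (Python; the container lens with attributes shading_models, xf, yf, uf, vf is a hypothetical organization of the first data record 1010, and the shading convention x.sup.2+y.sup.2>1 anticipates paragraph [0095] of the exemplary embodiment below):

        def eval_sparse_poly(terms, x_s, y_s, x_a, y_a, beta):
            # terms: list of (coefficient, (i, j, k, l, m)) tuples
            return sum(c * x_s**i * y_s**j * x_a**k * y_a**l * beta**m
                       for c, (i, j, k, l, m) in terms)

        def trace_ray(lens, x_s, y_s, x_a, y_a, beta):
            """Returns the ray on the virtual front surface 1011, or None
            if the ray is absorbed at one of the shading surfaces 210."""
            args = (x_s, y_s, x_a, y_a, beta)
            for mx, my in lens.shading_models:  # one partial step per surface
                xv = eval_sparse_poly(mx, *args)
                yv = eval_sparse_poly(my, *args)
                if xv**2 + yv**2 > 1.0:         # test step: ray is shaded
                    return None                 # absorbed rays are discarded
            return (eval_sparse_poly(lens.xf, *args),   # first point of
                    eval_sparse_poly(lens.yf, *args),   # intersection 1013
                    eval_sparse_poly(lens.uf, *args),   # direction at the
                    eval_sparse_poly(lens.vf, *args))   # front surface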

    [0069] To calculate color images, the simulation step for simulating an individual light ray can optionally be carried out multiple times, typically three times, with a different wavelength or different wavelength band of the simulated light being assumed in each of these ray tracing calculations and a different partial data record 1010r, 1010g, and 1010b being able to be used for the simulation. For simulated lenses with small chromatic aberrations, ray tracing for only one data record 1010 may be sufficient, with a differential correction of the result ray directions and/or result ray positions being carried out for at least one of the considered wavelengths. The simulation step including a plurality of individual ray simulations for different wavelengths or including a single ray simulation with a subsequent differential correction step for further colors shall be referred to below as polychromatic ray simulation step.

    [0070] On the basis of the output data record for a given ray 2020, the intensity of the incident light 2010 can be deduced from the model of the scene to be imaged, from which the intensity contribution or, equivalently, the brightness contribution of this simulated light ray to the signal of the considered pixel in the respectively considered wavelength or color arises.

    [0071] For the same target pixel, the described monochromatic or polychromatic ray simulation step is carried out for a plurality of rays to be simulated. These are chosen such that they emanate from different positions on the second surface 200, typically the aperture stop 3.

    [0072] A brightness contribution, the intensity of the incident light ray 2010 at a given wavelength, is determined from information about the light coming from the object feature of the modeled scene observed in this direction for each of these simulated rays and all brightness contributions for a picture element are added up. The brightness contributions of shaded rays are discarded or set to zero so that they do not contribute. The intensities or brightness levels of the picture elements thus obtained are stored in a computer memory, typically in a random-access memory, the RAM, or in a flash memory or on a magnetic disk.
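
    The summation over a plurality of rays described in the last two paragraphs could be sketched as follows (Python; scene.radiance and the uniform random sampling of the stop are illustrative assumptions, and trace_ray is the per-ray sketch above):

        import numpy as np

        def render_pixel(lens, scene, x_s, y_s, beta, r_b, n_rays=64, rng=None):
            """Sum the brightness contributions of n_rays rays for one pixel."""
            rng = rng or np.random.default_rng()
            total = 0.0
            for _ in range(n_rays):
                # sample the second surface 200 uniformly within the set stop
                phi = rng.uniform(0.0, 2.0 * np.pi)
                r = r_b * np.sqrt(rng.uniform())    # sqrt: uniform over area
                out = trace_ray(lens, x_s, y_s,
                                r * np.cos(phi), r * np.sin(phi), beta)
                if out is None:
                    continue            # shaded rays contribute nothing
                x_f, y_f, u_f, v_f = out
                total += scene.radiance(x_f, y_f, u_f, v_f)
            return total                # stored as the picture element's value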

    [0073] The described simulation steps are repeated for further pixels of the sensor until a first image is completely constructed.

    [0074] To generate an image sequence for a cinematographic image sequence, the described steps are repeated for further images. In this case, the scene to be modeled, the position of the camera to be simulated and/or the adjustable lens parameters such as for example focus and/or focal length and/or aperture stop may change. Such changes are required for cinematographic effects in particular, where for example the attention of the observer is steered from one object feature to another by a focal shift. As described, there is no recalculation of the data record 1010 in the case of such changes. Changes in adjustable lens parameters are only incorporated in the data record 1020, which brings a substantial speed advantage over methods from the prior art and makes the rendering of such scenes by way of the method according to an exemplary embodiment of the disclosure particularly efficient.

    [0075] As described, this may still be followed by a compositing method step, in which the simulated image or the simulated image sequence is fused with actually recorded images.

    [0076] FIG. 7 illustrates a schematic lens-element section of an exemplary lens to be simulated, for an “infinity” focus setting. As may be gathered from the illustration in FIG. 7, the lens 1 includes a first, a second, a third, and a fourth lens element 7, 8, 9, and 10, which are successively arranged in this sequence along an optical axis OA of the lens 1 proceeding from the object side. The first and the third lens elements 7, 9 each have positive refractive power and the second and the fourth lens elements 8, 10 each have negative refractive power.

    [0077] The method according to an exemplary embodiment of the disclosure for simulating the effect of the lens of FIG. 7 may consequently include the following steps:

    [0078] A virtual front surface 1011 is defined first. The virtual front surface 1011 is located at a predefined distance in front of the image sensor 100 and has a predefined and/or optimized radius of curvature. In the exemplary embodiment, the distance from the sensor is 60.0 mm and the radius of curvature is 13.365 mm; however, other values may also be chosen, depending on application and/or need. In the second step, the lens model is created by a training phase. To this end, a plurality of training or validation rays are generated, each with a defined wavelength. By way of example, these may be more than 5000 training rays, but typically more than 10 000 training rays, and/or more than 2000 validation rays, but typically more than 4000 validation rays, per wavelength and/or focus and/or considered value of one or more other variable lens parameters. In the exemplary embodiment, 9 approximately equally distributed focus settings and a wavelength of the simulated light of 440 nm were assumed.

    [0079] The shading surfaces are identified in the next step. In the exemplary embodiment, these are the radius of the front surface of the first lens element 7 of 8.7 mm and a further shading surface 11 with a radius of 6.0 mm.

    [0080] In the next step, the parameter set for the abstract mathematical model given by the transformation rule is created with an optimization method. In the exemplary embodiment, this is a sparse polynomial with 20 coefficients of no more than 6th order for the ray position and direction, and with 5 coefficients of no more than 3rd order for the shading information.

    [0081] This exemplary model is optimized using the orthogonal matching pursuit method and the specified coefficients are calculated in the process.

    [0082] The resulting output data include at least one of the values from the following data: radius of curvature of the virtual front surface, distance between the virtual front surface and sensor, minimum f-number (F.sub.min), supported focal range (d), and focal length (f).

    [0083] The user of the exemplary lens model obtains the following metadata:

    [0084] Radius of curvature of the virtual front surface: 13.365 mm

    [0085] Distance between the virtual front surface and the sensor: 60.0 mm

    [0086] Minimum f-number: F.sub.min=2.87

    [0087] Supported focal range: d=507 mm to infinity

    [0088] Focal length: f=50.0 mm

    [0089] Input variables of the exemplary lens model:

    [0090] x.sub.s, y.sub.s: Ray position on the sensor, definition ranges −18.0 mm≤x.sub.s≤18.0 mm and −12 mm≤y.sub.s≤12 mm

    [0091] x.sub.a, y.sub.a: Ray position in the virtual stop, definition range: x.sub.a.sup.2+y.sub.a.sup.2<(F.sub.min/F).sup.2 for the f-number F


    β: Magnification, β=f/(f−d) for the focal distance d

    [0092] Output variables of the exemplary lens model:

    [0093] x.sub.f, y.sub.f: Ray position projected onto a tangential plane at the apex of the virtual front surface in mm

    [0094] u.sub.f, v.sub.f: Ray direction, projected onto a tangential plane of the virtual front surface at the ray point

    [0095] x.sub.v1, y.sub.v1, x.sub.v2, y.sub.v2: Ray position on the shading surfaces. The ray is shaded if x.sup.2+y.sup.2>1 at one of the shading surfaces that generate vignetting.

    [0096] Added to this are the fit data of the parameterized optics of the lens to be simulated in the exemplary embodiment:

    [0097] x.sub.f

    TABLE-US-00001
    Coefficient       Exp. x.sub.s   Exp. y.sub.s   Exp. x.sub.a   Exp. y.sub.a   Exp. β   Order
    −0.459663284      1              0              0              0              0        1
    8.768720761       0              0              1              0              0        1
    −1.304539732      1              0              0              0              1        2
    0.000110631       3              0              0              0              0        3
    −0.00496582       2              0              1              0              0        3
    0.000109726       1              2              0              0              0        3
    −0.005078146      1              1              0              1              0        3
    0.091802222       1              0              2              0              0        3
    0.067526222       1              0              0              2              0        3
    0.025031505       0              1              1              1              0        3
    −0.07933075       0              0              1              2              0        3
    0.000671128       3              0              0              0              1        4
    0.000725495       1              2              0              0              1        4
    1.134481527       0              0              3              0              1        4
    −5.36E−05         2              1              1              1              0        5
    0.137138593       2              0              1              0              2        5
    0.158010585       1              1              0              1              2        5
    −0.784407072      1              0              0              2              2        5
    3.288014989       1              0              0              0              4        5
    0.000741927       3              0              2              0              1        6

    [0098] u.sub.f

    TABLE-US-00002
    Coefficient       Exp. x.sub.s   Exp. y.sub.s   Exp. x.sub.a   Exp. y.sub.a   Exp. β   Order
    0.014551901       1              0              0              0              0        1
    −0.666745323      0              0              1              0              0        1
    0.06893195        1              0              0              0              1        2
    0.000145798       2              0              1              0              0        3
    7.79E−06          1              2              0              0              0        3
    −0.000358482      1              1              0              1              0        3
    −0.00282992       1              0              2              0              0        3
    0.002553763       1              0              0              2              0        3
    −0.000240285      0              2              1              0              0        3
    0.013381898       0              1              1              1              0        3
    0.021614836       0              0              3              0              0        3
    −0.159397768      0              0              1              2              0        3
    −1.127168311      0              0              1              0              2        3
    4.83E−05          1              2              0              0              1        4
    −0.002460254      1              1              0              1              1        4
    −0.00172234       0              2              1              0              1        4
    −7.99E−06         1              2              2              0              0        5
    0.000348182       1              1              2              1              0        5
    −0.176833532      1              0              0              2              2        5
    −0.47254192       0              1              1              1              2        5

    [0099] x.sub.v1

    TABLE-US-00003
    Coefficient       Exp. x.sub.s   Exp. y.sub.s   Exp. x.sub.a   Exp. y.sub.a   Exp. β   Order
    −0.027987126      1              0              0              0              0        1
    1.011147804       0              0              1              0              0        1
    −0.000368083      2              0              1              0              0        3
    −0.000342116      1              1              0              1              0        3
    0.005565995       1              0              0              2              0        3

    [0100] x.sub.v2

    TABLE-US-00004
    Coefficient       Exp. x.sub.s   Exp. y.sub.s   Exp. x.sub.a   Exp. y.sub.a   Exp. β   Order
    0.031357313       1              0              0              0              0        1
    0.926341281       0              0              1              0              0        1
    0.043996073       1              0              0              0              1        2
    −0.257282737      0              0              1              0              1        2
    3.06E−06          3              0              0              0              0        3

    [0101] The coefficients for y.sub.f, v.sub.f, y.sub.v1 and y.sub.v2 have not been illustrated as they emerge directly from those for x.sub.f, u.sub.f, x.sub.v1 and x.sub.v2, respectively, for reasons of symmetry. Here,

    [00001] x.sub.f = Σ.sub.i,j,k,l,m c.sub.ijklm x.sub.s.sup.i y.sub.s.sup.j x.sub.a.sup.k y.sub.a.sup.l β.sup.m

    with the coefficient c, the exponent i of x.sub.s, the exponent j of y.sub.s, the exponent k of x.sub.a, the exponent l of y.sub.a, and the exponent m of β. This likewise applies to u.sub.f in the second table. One could also write:

    [0102] x.sub.f

    TABLE-US-00005
    c.sub.ijklm       i    j    k    l    m    Order
    −0.459663284      1    0    0    0    0    1
    8.768720761       0    0    1    0    0    1
    −1.304539732      1    0    0    0    1    2
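
    In this notation, the model is evaluated by summing the listed terms. The following Python sketch uses only the three x.sub.f terms of the compact table above; the truncation is for illustration, the full model uses all twenty coefficients:

        xf_terms = [                       # (c_ijklm, (i, j, k, l, m))
            (-0.459663284, (1, 0, 0, 0, 0)),
            ( 8.768720761, (0, 0, 1, 0, 0)),
            (-1.304539732, (1, 0, 0, 0, 1)),
        ]

        def eval_terms(terms, x_s, y_s, x_a, y_a, beta):
            return sum(c * x_s**i * y_s**j * x_a**k * y_a**l * beta**m
                       for c, (i, j, k, l, m) in terms)

        # axial sensor point, ray through the stop edge, infinity focus (beta = 0)
        print(eval_terms(xf_terms, 0.0, 0.0, 1.0, 0.0, 0.0))   # -> 8.768720761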

    [0103] It is understood that information about the scene to be imaged is required to calculate the intensity or brightness and the color of a pixel. If the directional information item about the first point of intersection 1013 of the light ray to be simulated which is incident on the virtual front surface 1011 is available, it is possible to carry out a back calculation as to the point on the modeled object from which the light incident there emanates. The information about the direction and the point of incidence or the first point of intersection 1013 of the simulated light ray can be obtained, for example, by the simulation method according to the disclosure, but also by other methods that supply equivalent results. Conventional ray tracing would be one example. Instead of the first point of intersection 1013 with the virtual front surface 1011, it is also possible to use a point of incidence on the first optically effective surface of the simulated lens or a point of incidence in the entrance pupil of the simulated lens or a point of incidence on a surface located far in front of the lens and/or close to the scene to be modeled.

    [0104] One option for determining the object point from which a light ray 2010 incident in the lens emanates lies in conventional ray tracing. An advantage thereof is that the representations thus obtained are physically correct and hence photorealistic. A disadvantage is that such calculations are very complicated and therefore require much computation time. It is desirable to have a method available which makes information available regarding the color and intensity or brightness carried by a light ray emanating from a modeled object and incident on the simulated lens 1 or a virtual front surface 1011 in front of the lens significantly faster than when using ray tracing.

    [0105] It is therefore also an object of the disclosure to provide a method which makes available physically almost correct information regarding the color and intensity or brightness carried by a light ray emanating from a modeled object and incident on the simulated lens or a virtual front surface in front of the lens significantly faster than when using ray tracing.

    [0106] This object according to an exemplary embodiment of the disclosure is achieved in conjunction with the described method for simulating lenses.

    [0107] In this context, it is desirable for this method according to an exemplary embodiment of the disclosure to be able to profit from the particular properties of the GPUs in relation to speed and parallelization.

    [0108] Knowledge of the ray directions, intensities and colors of the light rays 800 incident on an entrance surface 3000 is equivalent to the knowledge of what is known as the light field at this entrance surface. The nature of the entrance surface 3000 is such that it may contain for example the entrance pupil, the front surface of the front lens element or the virtual front surface 1011. In principle, any other surface may also be chosen provided the rays 800 contributing to the creation of the image pass through this surface. It is advantageous if the entrance surface 3000 is chosen to be larger than for example the entrance pupil, the front surface of the front lens element or the virtual front surface 1011. Should a simulation of the image creation by a lens be carried out using the simulation method described according to an exemplary embodiment of the disclosure, it is particularly advantageous to choose the entrance surface 3000 such that it includes the entrance pupil of the lens to be simulated or the front surface of the lens 1 to be simulated or the virtual front surface 1011, because in this case the light field approximated in the entrance surface 3000 can be used without a further transformation step for the simulation of the image creation.

    [0109] If the light field is known, the image creation by the lens 1 to be simulated can be modeled correctly with the aid thereof. Knowledge of the light field or of the parts of the light field relevant to the image representation can be obtained by ray tracing. However, this is very computationally intensive and therefore slow. However, full knowledge of the light field is not necessarily required to calculate realistic images, a sufficiently good approximation sufficing instead.

    [0110] A method according to an exemplary embodiment of the disclosure for obtaining such a sufficiently good approximation is the method described below.

    [0111] A plurality of ideal images are generated at different positions of the entrance surface 3000. This can be implemented by calculating these images with a pinhole camera model for pinhole cameras attached at the respective positions. The creation of a respective depth map for the scene to be modeled in addition to the ideal images is also useful. Advantageously, the pinhole camera images can be arranged such that they are each located at the position at which an incident simulated light ray strikes the entrance surface 3000. In this case, from the knowledge of the direction of this ray, the information about the intensity and color of the light ray can be gathered directly from the pinhole camera image. This can be achieved either by generating pinhole camera images at the respective positions and subsequently carrying out ray tracing through the lens, from the position of the pinhole camera to a certain pixel, or by initially carrying out ray tracing to a certain position on the entrance surface 3000 and subsequently generating the corresponding pinhole camera image.
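
    The direct lookup mentioned above could be sketched as follows (Python; the image layout, the field-of-view parameter, nearest-pixel sampling, and a camera frame with z along the optical axis are illustrative assumptions):

        import numpy as np

        def pinhole_lookup(image, fov_deg, direction):
            """Read color/intensity for a ray direction from the pinhole
            camera image rendered at the ray's point on the entrance surface."""
            h, w = image.shape[:2]
            f_px = 0.5 * w / np.tan(np.radians(fov_deg) / 2.0)
            u = 0.5 * w + f_px * direction[0] / direction[2]   # perspective
            v = 0.5 * h + f_px * direction[1] / direction[2]   # projection
            iu, iv = int(round(u)), int(round(v))
            if 0 <= iu < w and 0 <= iv < h:
                return image[iv, iu]
            return None    # direction outside the rendered field of view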

    [0112] To increase the number of available pinhole camera images, new pinhole camera images can be calculated by interpolation or extrapolation from existing pinhole camera images and at least one associated depth map. Here, the use of at least one depth map is necessary to generate the interpolated pinhole camera images with correct perspective. These interpolated images can be generated, for example using a method known as “screen space ray tracing.”

    [0113] In principle, it is possible to carry out such interpolations already from a single pinhole camera image with an associated depth map. However, at least two, and particularly typically three or more, pinhole camera images are used. FIG. 8 shows the situation in exemplary fashion for four, two and one rendered pinhole camera image 5100 and the interpolated or extrapolated pinhole camera images 5200 arising therefrom, in relation to the entrance pupil 5000.

    [0114] New pinhole camera images can also be obtained by artificial intelligence (AI) methods, with or without a depth map available, for example with the aid of a neural network.

    [0115] The positions of the pinhole cameras can particularly advantageously be arranged in one of the following setups:

    [0116] A fixed spiral grid of constant density, for example in the form of a so-called Fibonacci spiral as shown in FIG. 9. The positions of such pinhole cameras are typically arranged within the transmissive region of the entrance surface 3000; a sketch of this arrangement follows the list.

    [0117] A purely random arrangement, as shown in FIG. 10.

    [0118] At least three pinhole cameras at positions outside of the transmissive region of the entrance surface 3000 such that the entrance pupil or the front surface or the virtual front surface 1011 is located within the polygon described by the positions of the pinhole cameras.
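
    A minimal sketch of the spiral arrangement named in the first item above (cf. FIG. 9) is given below (Python; the disk radius and the sample count are assumptions for illustration):

        import numpy as np

        def fibonacci_disk(n, radius=1.0):
            """n pinhole camera positions on a disk in a Fibonacci spiral
            of approximately constant density."""
            golden_angle = np.pi * (3.0 - np.sqrt(5.0))
            i = np.arange(n)
            r = radius * np.sqrt((i + 0.5) / n)   # sqrt: constant area density
            theta = i * golden_angle
            return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

        positions = fibonacci_disk(64)   # e.g. within the entrance pupil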

    [0119] The positions of the pinhole camera images to be used can be fitted to the scene to be modeled in order to have available a sufficient number of perspective views and in order to avoid artifacts. By way of example, a gaze through a thin pipe may require perspectives that look into the pipe and other perspectives that look at the pipe from the outside.

    [0120] Naturally, any other arrangements of the pinhole cameras are also comprised by the disclosure. Fixed positions with an approximately constant density improve the parallelizability of the method, in particular on GPU-assisted calculation systems, and reduce noise in the resultant image. The quality of the result image increases with the number of ideal images used. This applies in particular to regions with significant blur since the images of defocused object points, which form the bokeh, are particularly complex. It was found to be advantageous to heuristically adapt the density of the positions by using information from the depth map and comparing it with the focus setting, to be simulated, of the lens to be simulated. The position of the pinhole camera grid can also be rotated or perturbed randomly, which may lead to a higher image quality. It is also advantageous to statically or dynamically adapt the density of the pinhole cameras to a heuristic quality measure of the resultant picture element.

    [0121] The information about the light field is obtained by an interpolation of the information from the individual ideal images. This may be carried out individually for each wavelength required, or else only exactly for a single wavelength while the information for the other required wavelengths is approximated therefrom in differential fashion.

    [0122] The simulation of the generation of the image is implemented by simulating the contributions of all picture elements or pixels of the sensor 100 to be simulated. To this end, the light contributions of a pixel are integrated by initially determining, for a plurality of rays incident on this pixel, the direction of incidence of these rays on the entrance surface 3000. Rays that are shaded in the lens 1 are discarded. The plurality of rays can typically be chosen such that the entrance surface 3000 is sufficiently uniformly penetrated by the rays. Another typical plurality of the rays can be chosen such that they intersect the entrance surface 3000 precisely at the positions for which ideal images are present. When converting interpolated ideal images into ray color and intensity, the inclusion of weighting factors may be advantageous since each image pixel corresponds to a light cone of a different size. By way of example, pixels right at the edge of the image recorder cover a smaller angular range than those in the center.
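
    One possible weighting factor follows from the solid angle subtended by a pixel of a pinhole image, which for a flat image plane falls off with the third power of the cosine of the off-axis angle (a sketch under this assumption; a centered principal point is also assumed):

        import numpy as np

        def pixel_weight(u, v, w, h, f_px):
            """Relative solid angle of pixel (u, v) in a w x h pinhole image
            with focal length f_px in pixels and a centered principal point."""
            x, y = u - 0.5 * w, v - 0.5 * h
            cos_theta = f_px / np.sqrt(x * x + y * y + f_px * f_px)
            return cos_theta ** 3   # edge pixels cover a smaller angular range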

    [0123] This simulation is typically carried out using the described parameterized method according to an exemplary embodiment of the disclosure, but other simulations such as ray tracing can also be used.

    [0124] For each simulated ray, the brightness contribution of this ray at the corresponding pixel is deduced from knowledge of ray direction, position on the entrance surface 3000 and knowledge of the light field or the approximate light field. This is implemented by evaluating the respectively associated pinhole camera image. The brightness contributions of all simulated rays are summed at the respectively corresponding pixels, as a result of which the image of the scene to be modeled arises.

    [0125] The beam direction for a given point of incidence on the imaging system and on the entrance surface 3000 depends on the wavelength. To determine the brightness contribution of a light ray from a light field or from an approximated light field it is possible to carry out a separate calculation for each required wavelength or color. If the directional changes are only small, it may be sufficient to assume only one calculation rule for a main wavelength and to only make small changes to this calculation rule for other wavelengths. The relative position in the light field is first determined for a main wavelength W1 and the brightness contribution or intensity contribution is determined therefrom. Then, for the same points of incidence of the simulated light ray at the imaging system and at the entrance surface 3000, the utilized relative position at W1 is used as a starting point for searching for the correct relative position for the further wavelengths. Then, the brightness contribution or the brightness contributions can be calculated for one or more wavelengths. By way of example, this can be implemented by interpolation, typically linear interpolation for the selected wavelengths. By applying this procedure to a plurality of picture elements of the imaging system it is possible to obtain an image of the scene to be modeled, including polychromatic aberrations of the lens.

    [0126] The accuracy with which the ray direction is determined by the parametric optics is decisive for the quality of the resultant simulated image. This follows from the laws of geometric optics, in particular from the intercept theorem. For identical points of incidence on the imaging system and on the entrance surface 3000, the directional differences of the ray for different wavelengths or colors are linear to a first approximation. Depending on the chosen parametric representation, these differences can therefore be fitted using a reduced set of parameters without any loss of overall accuracy, as a result of which less computation time is required when evaluating the parametric function.
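    The following self-contained sketch illustrates this with invented sample offsets: a two-parameter linear fit over the wavelength reproduces the directional differences to well within the scale of the data:

        import numpy as np

        # Illustrative data only: direction offsets (in radians) between a main
        # wavelength of 550 nm and other wavelengths, measured at identical
        # points of incidence, are nearly linear in the wavelength, so a
        # degree-1 fit with just two parameters reproduces them closely.
        wavelengths = np.array([450.0, 500.0, 550.0, 600.0, 650.0])  # nm
        dx = np.array([-2.1e-4, -1.0e-4, 0.0, 1.1e-4, 2.0e-4])       # offsets
        slope, intercept = np.polyfit(wavelengths, dx, 1)
        dx_fit = slope * wavelengths + intercept
        assert np.max(np.abs(dx_fit - dx)) < 1e-5  # error well below the data scale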

    [0127] FIG. 11 schematically shows, in exemplary fashion, the construction of an image simulation on the basis of an abstract stairway object and an associated pinhole camera image of this stairway object. The recording of a set of objects 5600 arranged in stairway-like fashion by a camera 5500 is simulated, such that the individual steps have different distances from the camera 5500. One of the steps is in the focus of the camera 5500 while the other steps are out of focus. The image 5700 of the stairway object recorded with an arbitrarily small aperture stop corresponds to a pinhole camera image with infinite depth of field; the individual steps are all imaged with the same sharpness in this case. FIG. 12 shows a comparison of three simulations using the method according to an exemplary embodiment of the disclosure.

    [0128] The image arising from the simulation may be mixed with actually recorded images by compositing. According to an exemplary embodiment of the disclosure, the simulated image is generated by a lens simulation which simulates the lens used in a real recording, as a result of which the image impression of the mixed image is particularly harmonious.
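    Such compositing can be as simple as a linear alpha blend; the sketch below assumes float images in a linear color space and a per-pixel matte, which is an assumption made for illustration, since no particular compositing operator is prescribed:

        import numpy as np

        def composite(real, simulated, alpha):
            """Linear alpha compositing of a simulated image over a real
            recording; real and simulated are HxWx3 float arrays and alpha
            is an HxW matte in [0, 1]."""
            return alpha[..., None] * simulated + (1.0 - alpha[..., None]) * real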

    [0129] Overview of the Solutions According to an Exemplary Embodiment of the Disclosure

    [0130] A) A method for generating a brightness contribution for a picture element of an image by simulating an image representation of a scene using an optical imaging system which comprises an image recorder (100) located on a first surface (110) and a lens (1), comprising the following steps:

    [0131] providing a first data record (1010) comprising data which describe the effect of the lens (1) to be simulated on light rays (800),

    [0132] providing a second data record (1020) comprising data about a point of incidence (111) of a light ray (800) on the image recorder (100) and about a virtual front surface (1011),

    [0133] providing a transformation rule (900),

    [0134] calculating a first point of intersection (1013) of the light ray (800) with the virtual front surface (1011) and a direction of the light ray (800) at the first point of intersection (1013) by applying the transformation rule (900) to the first data record (1010) and the second data record (1020),

    [0135] determining the brightness contribution of the light ray (800),

    [0136] storing an information item regarding the calculated brightness contribution of the light ray (800), wherein [0137] the first data record (1010) comprises data about a second surface (200) and [0138] the second data record (1020) comprises data about a second point of intersection (201) of the light beam (800) with the second surface (200).

    [0139] B) The method according to A), wherein

    [0140] the lens (1) has at least one adjustable imaging parameter (1022, 1023) and

    [0141] the second data record (1020) contains information items about the at least one adjustable imaging parameter (1022, 1023) of the lens (1).

    [0142] C) The method according to B), wherein the at least one adjustable imaging parameter (1023) comprises the focus setting and/or the focal length and/or the magnification and/or the field curvature of the lens (1).

    [0143] D) The method according to A), B) or C), wherein the lens (1) comprises a stop, typically an aperture stop (3), and the second surface (200) coincides with the stop, typically with the aperture stop (3).

    [0144] E) The method according to D), wherein one of the adjustable imaging parameters (1022) describes at least one of the dimensions of the aperture stop (3).

    [0145] F) The method according to E), wherein the aperture stop (3) is at least approximately circular and the information items relating to the second point of intersection (201) of the light ray (800) with the second surface (200) contain a normalized radius.

    [0146] G) The method according to any one of A) to F), wherein the first data record (1010) comprises data

    [0147] in relation to at least one shading area (210) in the lens (1) and

    [0148] in relation to the effect on the at least one light beam (800) of that part of the lens (1)

    [0149] which extends between the at least one shading area (210) and the image recorder (100),

    and comprises the following steps before storing the brightness contribution of the at least one light ray (800):

    [0150] calculating a third point of intersection (211) with the at least one shading area (210),

    [0151] checking whether the at least one light ray (800) is absorbed at or transmitted through the third point of intersection (211),

    [0152] discarding the light ray (800) or setting the brightness contribution to zero if the at least one light ray (800) is absorbed.

    [0153] H) A method for generating a picture element of an image, comprising the following steps:

    [0154] selecting a point of incidence (111) for light rays (800) on the image recorder (100),

    [0155] selecting a plurality of different second points of intersection (201) on the second surface (200),

    [0156] carrying out ray tracing according to A) to G) for each of the second points of intersection (201),

    [0157] summing the brightness contributions arising, and

    [0158] storing a result of the summation.

    [0159] J) A method for generating an image, characterized by the following steps:

    [0160] selecting a plurality of picture elements on the image recorder,

    [0161] calculating the brightness contributions of the light rays (800) incident on each of the picture elements using the method according to H), and

    [0162] storing the results.

    [0163] K) The method according to J), wherein

    [0164] the brightness contribution of each of the light rays (800) simulated to this end, which intersect the virtual front surface (1011) at the first point of intersection (1013), is determined with the aid of a pinhole camera image in each case,

    [0165] said image corresponding to the image generated by a pinhole camera placed at the respective first point of intersection (1013).

    [0166] L) The method according to K), wherein the virtual front surface (1011) and the entrance pupil of the lens to be simulated coincide.

    [0167] M) The method according to L), wherein the second surface (200) coincides with the virtual front surface.

    [0168] P) The method according to any one of K), L) or M), wherein at least one of the pinhole camera images was calculated by interpolation or extrapolation from other pinhole camera images.

    [0169] Q) A method for generating an image, comprising the following steps:

    [0170] providing a real image recorded by a real camera,

    [0171] providing a virtual image generated according to any one of K), L), M) or P),

    [0172] fusing or overlaying at least a portion of the real image and at least a portion of the virtual image,

    [0173] storing the image created, wherein the adjustable lens parameters used for the simulation correspond at least approximately to those used during the real recording.

    [0174] R) A method for generating an image sequence consisting of individual images, comprising the following steps:

    [0175] providing a virtual scene,

    [0176] providing a camera position in relation to the virtual scene,

    [0177] calculating the individual images of the image sequence in accordance with one of the methods according to any one of K), L), M), P) or Q),

    [0178] storing the image sequence.

    [0179] S) A computer program product suitable for carrying out a method according to any one of A) to R) after being loaded onto a computer.

    [0180] It is understood that the foregoing description is that of the exemplary embodiments of the disclosure and that various changes and modifications may be made thereto without departing from the spirit and scope of the disclosure as defined in the appended claims.

    LIST OF REFERENCE NUMERALS

    [0181] 1 Lens
    [0182] 2 Lens elements
    [0183] 3 Aperture stop
    [0184] 4 Mount
    [0185] 5 Pixel
    [0186] 100 Image recorder/sensor
    [0187] 110 Sensor surface
    [0188] 111 Point of incidence of the simulated light rays on the sensor
    [0189] 112 Directional information item regarding the ray incident on the sensor
    [0190] 200 Second surface
    [0191] 201 Second point of intersection
    [0192] 210 Shaded surface
    [0193] 211 Third point of intersection
    [0194] 300 Computer
    [0195] 800 Simulated light ray
    [0196] 900 Transformation rule
    [0197] 901 Virtual lens
    [0198] 1000 Input data records
    [0199] 1010 First data record (“virtual lens”)
    [0200] 1010r, 1010g, 1010b Partial data records for different colors
    [0201] 1011 Virtual front surface
    [0202] 1012 Axis of symmetry
    [0203] 1013 First point of intersection
    [0204] 1020 Second data record
    [0205] 1021 Information items regarding a light ray to be simulated
    [0206] 1022 Information items regarding the set stop
    [0207] 1023 Information items regarding further imaging parameters that are adjustable on the lens
    [0208] 1024 Information items regarding the wavelength or color of the light to be simulated
    [0209] 1030 Partial data records
    [0210] 2000 Output data record
    [0211] 2010 Incident ray
    [0212] 3000 Entrance surface