METHOD AND APPARATUS FOR EMULATING CAMERA OBJECTIVES

20210185241 · 2021-06-17

    Abstract

    The invention relates to a method for configuring an emulation method for emulating a second camera objective, comprising the steps: determining a plurality of first images that are recorded using a first camera objective; determining a plurality of second images that are recorded using the second camera objective and whose image content corresponds to the image content of the plurality of first images; and configuring the emulation method for emulating the second camera objective on the basis of the plurality of first images and the plurality of second images.

    Claims

    1. A method for configuring an emulation method for emulating a second camera objective, comprising the steps: determining a plurality of first images that are recorded using a first camera objective; determining a plurality of second images that are recorded using the second camera objective and whose image content corresponds to the image content of the plurality of first images; and configuring the emulation method for emulating the second camera objective on the basis of the plurality of first images and the plurality of second images.

    2. A method in accordance with claim 1, wherein the step of configuring the emulation method comprises determining parameters that represent the second camera objective.

    3. A method in accordance with claim 1, wherein the plurality of first images are recorded under predefined conditions; wherein the plurality of second images are recorded under the predefined conditions; wherein the predefined conditions comprise at least one of: at least one predefined lighting situation; at least one predefined camera objective aperture setting; at least one predefined camera objective focus setting; at least one predefined camera objective focal length setting; at least one camera sensor sensitivity setting; or at least one camera sensor white balance setting.

    4. A method in accordance with claim 1, wherein the method comprises the step of generating the plurality of first images and the step of generating the plurality of second images, wherein the plurality of first images and the plurality of second images are simultaneously recorded by two different image sensors.

    5. A method in accordance with claim 4, wherein, for the simultaneous recording of the plurality of first images and the plurality of second images, a respective image motif is imaged onto a first image sensor via a beam splitter by the first camera objective and is simultaneously imaged onto a second image sensor via the beam splitter by the second camera objective.

    6. A method in accordance with claim 1, wherein the emulation method uses a first artificial neural network.

    7. A method in accordance with claim 6, wherein the first artificial neural network has a deconvolutional neural network.

    8. A method in accordance with claim 1, wherein the emulation method is configured by optimizing a target function.

    9. A method in accordance with claim 8, wherein the target function is based on the extent to which a respective first image differs from a corresponding second image after a processing by the emulation method.

    10. A method in accordance with claim 8, wherein the target function is based on the extent to which a respective first image, after a processing by the emulation method, creates the impression on an observer of having been recorded by means of the second camera objective.

    11. A method in accordance with claim 8, wherein the target function is based on a second artificial neural network.

    12. A method in accordance with claim 11, wherein the second artificial neural network has a convolutional neural network.

    13. A method in accordance with claim 11, wherein the emulation method uses a first artificial neural network, and wherein the emulation method is configured on the basis of a zero-sum game between the first artificial neural network and the second artificial neural network.

    14. A method in accordance with claim 11, wherein the emulation method uses a first artificial neural network, and wherein the method further comprises the step: alternately optimizing the first artificial neural network and the second artificial neural network.

    15. A method for emulating a second camera objective, comprising the steps: determining an image recorded using a first camera objective; determining parameters that represent the second camera objective; and applying a processing rule to the determined image using the parameters in order to emulate the second camera objective.

    16. A method in accordance with claim 15, wherein the processing rule has an artificial neural network.

    17. A method in accordance with claim 16, wherein the artificial neural network has a deconvolutional neural network.

    18. A method in accordance with claim 15, wherein the method was configured in accordance with the method in accordance with claim 1.

    19. A method in accordance with claim 18, wherein the configuration of the method comprises determining the parameters.

    20. A method in accordance with claim 15, wherein the parameters are provided in encrypted form, wherein the step of determining the parameters comprises decrypting the parameters on the basis of a user key.

    21. A method in accordance with claim 15, wherein a plurality of parameter sets are provided that correspond to at least one of: a plurality of different second camera objectives; a plurality of different first camera objectives; a plurality of camera objective aperture settings; a plurality of camera objective focus settings; a plurality of camera objective focal length settings; a plurality of camera sensor sensitivity settings; or a plurality of camera sensor white balance settings; wherein the method further comprises the steps: selecting at least one of: the second camera objectives; the first camera objectives; a camera objective aperture setting; a camera objective focus setting; a camera objective focal length setting; a camera sensor sensitivity setting; or a camera sensor white balance setting; determining a parameter set that includes parameters that represent at least one of: the selected second camera objective; the selected first camera objective; the selected camera objective aperture setting; the selected camera objective focus setting; the selected camera objective focal length setting; the selected camera sensor sensitivity setting; or the selected camera sensor white balance setting; and applying the processing rule to the recorded image using the parameters of the determined parameter set in order to emulate the second camera objective.

    22. A method in accordance with claim 15, wherein, for the determination of the image, the first camera objective images an image motif onto an image sensor and the image sensor generates corresponding image signals.

    23. A computer program product comprising commands that, when executed by a computer, cause said computer to perform the method in accordance with claim 1.

    24. A motion picture camera that is configured to record a time sequence of frames and that has a control unit that is configured to process the frames in accordance with the method in accordance with claim 15.

    25. A motion picture camera in accordance with claim 24, wherein the control unit is configured to form an artificial neural network.

    26. A motion picture camera in accordance with claim 24, wherein the control unit has a decryption module that is configured to decrypt the parameters.

    27. A motion picture camera in accordance with claim 24, wherein the motion picture camera has a housing that accommodates an image sensor for generating image signals and that has an interchangeable lens mount to which the first camera objective is selectively fastenable.

    Description

    [0071] The invention will be described in the following with reference to an embodiment and to the drawings.

    [0072] FIG. 1 shows a system with an emulator;

    [0073] FIG. 2 shows a system for configuring an emulator;

    [0074] FIG. 3 shows a system for providing images by means of different objectives;

    [0075] FIG. 4 shows a system for configuring an emulator;

    [0076] FIG. 5 illustrates a desired mode of operation of an assessment unit; and

    [0077] FIG. 6 shows a system for configuring an assessment unit.

    [0078] Elements that are the same or of the same kind are marked by the same reference numerals in the drawings.

    [0079] FIG. 1 shows, in a schematic representation, a system 100 with an emulator 104 that, starting from a first image 102 that was recorded by a first objective (in other words, using the first objective), produces an emulated image 106 that creates the impression on a human observer of having been recorded by a second objective (different from the first objective). The image content of the emulated image 106 is in this respect identical to the image content of the first image 102. However, the emulated image 106 differs from the first image 102 by the characteristic impression of the respective objectives.

    [0080] In the images shown in the Figures, for better legibility, it is indicated in the upper right corner whether the image is an image (not processed by means of the emulation method) that was recorded by means of the first objective (“1” in the upper right corner), an image that was recorded by means of the second objective (“2” in the upper right corner), or an emulated image that was recorded by means of the first objective and that is intended to emulate the image impression of the second objective (“1->2” in the upper right corner).

    [0081] The emulator 104 may be configured (in other words: set up; in other words: trained) by machine learning. For example, the emulator 104 may be an (artificial) neural network.

    [0082] FIG. 2 shows, in a schematic representation, a system 200 for configuring the emulator 104. For this purpose, a second image 202 that was recorded by the second objective may be provided, specifically such that the second image 202 shows the same content (that is, for example, the same scene or the same motif) as the first image 102. The first image 102 and the second image 202 therefore only differ in that the first image 102 was recorded by the first objective, while the second image 202 was recorded by the second objective, for example as is described further below with reference to FIG. 3.

    [0083] The emulated image 106, which represents an emulation of the second objective based on the first image 102 recorded by means of the first objective, may be compared with the second image 202, which was actually recorded by the second objective, using an assessment unit 204. The result of the comparison performed by the assessment unit 204 may be used to configure the emulator 104 (as shown by the arrow 206). During the configuration, parameters of the emulator 104 may be updated (for example, slightly adapted) on the basis of each pair of the first image 102 and the associated second image 202. The parameters of the emulator 104 may, for example, comprise the weights of the links between individual neurons within an artificial neural network.
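By way of a simplified, non-limiting sketch, the per-pair parameter update described above may look as follows. The affine processing rule, the mean-squared-error comparison, and all numeric values here are assumptions chosen purely for illustration and do not represent the disclosed emulation method:

```python
import numpy as np

# Toy stand-in for the emulator 104: a processing rule with two
# parameters (gain, offset) that are updated on the basis of each
# pair of a first image and the associated second image.
def emulate(first_image, gain, offset):
    return gain * first_image + offset

def update_parameters(first_image, second_image, gain, offset, lr=0.1):
    # One gradient step on 0.5 * mean((emulated - second)**2), i.e.
    # the comparison result of the assessment drives the update.
    error = emulate(first_image, gain, offset) - second_image
    gain -= lr * np.mean(error * first_image)
    offset -= lr * np.mean(error)
    return gain, offset

# Synthetic image pairs: the "second objective" is simulated as a
# fixed brightness/contrast change applied to the first image.
rng = np.random.default_rng(0)
gain, offset = 1.0, 0.0
for _ in range(2000):
    first = rng.uniform(0.0, 1.0, size=(8, 8))
    second = 1.3 * first - 0.05
    gain, offset = update_parameters(first, second, gain, offset)

print(round(gain, 2), round(offset, 2))  # recovers 1.3 and -0.05
```

A real emulator would have many more parameters (for example, the weights of a neural network), but the principle of slightly adapting them per image pair is the same.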

    [0084] In an embodiment not shown in FIG. 2, the first image 102 and the second image 202 may be directly transferred to a configuration unit that configures the emulator 104 on the basis of the first image 102 and the second image 202. A direct comparison of the emulated image 106 with the second image 202 is then not absolutely necessary.

    [0085] Even though only one first image 102 and one second image 202 are shown in FIG. 2, a plurality of first images 102 and a plurality of second images 202 are typically necessary to configure the emulator 104, wherein one of the plurality of second images 202 is associated with each of the plurality of first images 102. In effect, the emulator 104 is therefore configured on the basis of the possible inputs in normal operation (that is, the first images 102) and of the desired outputs that the emulator 104 should generate in response to an input (that is, the second images 202).

    [0086] FIG. 3 shows, in a schematic representation, a system 300 for providing images 102, 202 by means of different objectives 306, 308. A template 302, which is to be the content of the recorded images 102, 202 and which may also be designated as an image motif, may be imaged by means of a beam splitter 304 and a first objective 306 to obtain the first image 102. The template 302 may further be imaged by means of the beam splitter 304 and a second objective 308 in order to obtain the second image 202. The first objective 306 and the second objective 308 may have the same focal length so that the template 302 is at least substantially identically represented in the first image 102 and in the second image 202 (in other words: so that the image content of the first image 102 at least substantially corresponds to the image content of the second image 202). The first image 102 and the second image 202 therefore only differ by the imaging characteristics (that is, the look) of the objectives used (the first objective 306 and the second objective 308). To exclude or to minimize an influence of the image detection unit (that is, for example, of the image sensor and of any image (pre)processing taking place in the camera) on the images 102, 202, the first objective 306 and the second objective 308 may be operated with identical camera hardware (in particular with identical image sensors).

    [0087] The use of the beam splitter 304 for generating the two images 102, 202 that (at least substantially) have the same content enables the simultaneous generation of the images, which is of advantage in particular for moving content 302. However, in particular with static content 302, the second image 202 may also be recorded directly (that is, without using the beam splitter 304) using the second objective 308, and after or before this, the first image 102 may be recorded using the first objective 306 (without using the beam splitter 304).

    [0088] FIG. 4 shows, in a schematic representation, a system 400 for configuring the emulator 104. As indicated by the arrow 402, the training goal for the emulator 104 is that an image that was recorded by means of the first objective 306 and that was transformed into an emulated image 106 by means of the emulator 104 cannot be distinguished from an image from the second objective 308. The training goal for the assessment unit 204 is to train it such that it can nevertheless distinguish an image that was recorded by means of the first objective 306 and that was transformed into an emulated image 106 by means of the emulator 104 from an image from the second objective 308.

    [0089] With a (fixed or determined) first objective 306, a plurality of trainings may be performed using different second objectives 308 that then result in differently configured emulation methods (that is, for example, in different parameters or different parameter sets). The different parameters may then be stored (and associated with the (fixed or determined) first objective 306 and the respective second objective 308), and the one or the other parameter set may be used for the emulation method as required. It is thus possible to make the images from a first objective 306 appear like the images from the one or the other second objective 308.

    [0090] With a (fixed or determined) second objective 308, a plurality of trainings may be performed using different first objectives 306 that then result in differently configured emulation methods (that is, for example, in different parameters or different parameter sets). The different parameters may then be stored (and associated with the respective first objective 306 and with the (fixed or determined) second objective 308), and the one or the other parameter set may be used for the emulation method as required. It is thus possible to make the images from different first objectives 306 appear like an image from the second objective 308.
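The storage and selection of the differently configured parameter sets may, purely by way of illustration, be organized as a look-up keyed by the combination of objectives; all identifiers and values below are hypothetical and chosen only for the example:

```python
# Hypothetical parameter store: each completed training run yields one
# parameter set, keyed here by (first objective, second objective).
# Further keys (aperture setting, focus setting, focal length, sensor
# sensitivity, white balance, ...) could be added in the same way.
parameter_store = {
    ("first_lens_A", "second_lens_X"): {"gain": 1.3, "offset": -0.05},
    ("first_lens_A", "second_lens_Y"): {"gain": 0.9, "offset": 0.02},
    ("first_lens_B", "second_lens_X"): {"gain": 1.1, "offset": -0.01},
}

def select_parameters(first_objective, second_objective):
    """Determine the parameter set for the selected combination."""
    return parameter_store[(first_objective, second_objective)]
```

The processing rule is then applied to the recorded image using the parameters of the determined parameter set.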

    [0091] FIG. 5 shows an illustration 500 of the desired mode of operation of the assessment unit 204. The assessment unit 204 processes an image as input and outputs information on whether the image was recorded by means of the second objective or not. In the examples shown in FIG. 5, the assessment unit 204 outputs either “yes” (represented by a check mark) or “no” (represented by an X) as this information. In general, however, instead of information that represents “yes” or “no” (in a binary manner), the information on whether the image was recorded by means of the second objective or not may also take the form of a probability (for example in percent, for example represented by a real or natural number between 0 and 1 or between 0% and 100%, or by a “fuzzy” statement such as “very definitely yes”, “probably yes”, “no exact statement possible”, “probably no”, “very definitely no”). For example, the information on whether the image was recorded by means of the second objective may be indicated by a pixel-wise distance from an image that was actually recorded by the second objective. Alternatively, this information may indicate how similar the effect of the image on a human observer is to that of an image that was actually recorded by the second objective.
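The different output forms described above (binary, probability, “fuzzy” statement) may be sketched, for illustration only, as a mapping from a raw score of the assessment unit; the logistic mapping and the threshold values are arbitrary assumptions:

```python
import math

def to_probability(score):
    # Map a raw, unbounded score of the assessment unit to a
    # probability between 0 and 1 (logistic function).
    return 1.0 / (1.0 + math.exp(-score))

def to_fuzzy(probability):
    # Map the probability to one of the "fuzzy" statements; the
    # thresholds here are illustrative assumptions.
    if probability >= 0.95:
        return "very definitely yes"
    if probability >= 0.65:
        return "probably yes"
    if probability > 0.35:
        return "no exact statement possible"
    if probability > 0.05:
        return "probably no"
    return "very definitely no"
```

For example, a raw score of 0 corresponds to a probability of 0.5 and thus to “no exact statement possible”.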

    [0092] On the input of the second image 202 recorded by means of the second objective, the information output by the assessment unit 204 on whether the second image 202 was recorded by means of the second objective or not should indicate that the second image 202 was recorded by means of the second objective, as represented by the check mark 502. On the input of the image 102 recorded by means of the first objective, the information output by the assessment unit 204 on whether the first image 102 was recorded by means of the second objective or not should indicate that the image 102 was not recorded by means of the second objective, as represented by an X 504. On the input of the image generated by the emulator 104 (that is the emulated image 106), the information output by the assessment unit 204 on whether the image 106 generated by the emulator 104 was recorded by means of the second objective or not should indicate that the image 106 generated by the emulator 104 was not recorded by means of the second objective, as represented by an X 506.

    [0093] As described further above, the emulator 104 may be configured (in other words, trained). The assessment unit 204 may also be configured (in other words, trained). In this respect, the goal is to configure the emulator 104 such that it generates images that cannot be distinguished from images that were recorded by means of the second objective. The goal of the training of the assessment unit 204 is to configure it such that it can identify any image that was not recorded by means of the second objective. In this respect, the assessment unit 204 should also be able to identify images that were generated by means of the emulator 104 as not recorded by means of the second objective. The emulator 104 and the assessment unit 204 are therefore in competition with one another in this sense so that the configuration of the emulator 104 and the configuration of the assessment unit 204 may be designated as a zero-sum game. The “better” the emulator 104 is, the more difficult it is for the assessment unit 204 to identify an emulated image as not actually recorded by means of the second objective.
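The zero-sum game between the emulator 104 and the assessment unit 204 may be sketched, in a strongly simplified one-dimensional form, as an alternating optimization; here the “images” are scalar samples, the emulator has a single shift parameter, and the assessment unit is a logistic classifier, all of which are assumptions made solely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.0      # emulator parameter: shifts the "emulated images"
w, b = 0.0, 0.0  # assessment-unit parameters (logistic classifier)
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(2000):
    real = rng.normal(2.0, 1.0, 64)           # stands in for second images 202
    fake = rng.normal(0.0, 1.0, 64) + theta   # stands in for emulated images 106

    # (a) One step for the assessment unit: push its output towards
    # "yes" for real samples and towards "no" for emulated samples.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # (b) One step for the emulator: shift the emulated samples so
    # that the assessment unit tends to answer "yes" for them.
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)

# theta drifts towards the real mean (2.0), at which point the
# assessment unit can no longer separate the two sample sets.
```

The alternating steps (a) and (b) correspond to the alternate optimization of the first and second artificial neural networks mentioned in the claims, reduced here to the smallest possible parameter count.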

    [0094] The assessment unit 204 may be configured (in other words: set up; in other words: trained) by machine learning. For example, the assessment unit 204 may be an (artificial) neural network.

    [0095] FIG. 6 shows, in a schematic representation, a system 600 for configuring the assessment unit 204. Images 602 and associated information 604 on whether the respective image 602 was recorded by means of the second objective or not are provided as a data input for the configuration of the assessment unit 204. The quantity of the images 602 and of the associated information 604 may also be designated as a training data set.

    [0096] Each image 602 is fed to the assessment unit 204 and the assessment unit 204 outputs information 606 on whether the image 602 (according to the assessment by the assessment unit 204) was recorded by means of the second objective or not. In a comparison unit 608, the information 606 that was output by the assessment unit 204 is processed, for example compared, together with the information 604 that indicates whether the image 602 was actually recorded by means of the second objective or not, and the assessment unit 204 is configured on the basis of this processing, as represented by the arrow 610. In this respect, the configuration may comprise determining parameters that the assessment unit 204 uses. The parameters may be updated (for example, slightly adapted) on the basis of each image 602 to be processed and the associated information 604, for example using a gradient method. In effect, the configuration comprises setting the parameters of the assessment unit 204 such that the input information 604 and the information 606 determined by the assessment unit 204 coincide as well as possible for as many input images 602 as possible.
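The configuration of the assessment unit 204 from labeled pairs (image 602, information 604) may be sketched as a supervised gradient method; the one-dimensional brightness feature, the logistic model, and all numeric values are illustrative assumptions, not the disclosed training procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_image(recorded_by_second):
    # Synthetic 4x4 "image" 602: images from the second objective are
    # assumed, purely for illustration, to be somewhat brighter.
    base = rng.uniform(0.0, 1.0, (4, 4))
    return base + 0.5 if recorded_by_second else base

def assess(image, w, b):
    # Information 606: probability that the image was recorded by
    # means of the second objective (logistic model on mean brightness).
    return 1.0 / (1.0 + np.exp(-(w * image.mean() + b)))

w, b, lr = 0.0, 0.0, 0.5
for _ in range(3000):
    label = int(rng.integers(0, 2))   # information 604 (1 = second objective)
    image = make_image(label == 1)
    p = assess(image, w, b)
    # Gradient step on the cross-entropy between 606 and 604: the
    # parameters move so that the output p approaches the label.
    w += lr * (label - p) * image.mean()
    b += lr * (label - p)

# After the configuration, the assessment unit separates the classes.
correct = 0
for _ in range(200):
    label = int(rng.integers(0, 2))
    image = make_image(label == 1)
    correct += int((assess(image, w, b) > 0.5) == (label == 1))
print(correct / 200)
```

The per-sample update shown here is exactly the kind of slight adaptation per image and associated information that the gradient method of the comparison unit 608 performs.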

    REFERENCE NUMERAL LIST

    [0097] 100 system with an emulator

    [0098] 102 first image

    [0099] 104 emulator

    [0100] 106 emulated image

    [0101] 200 system for configuring the emulator

    [0102] 202 second image

    [0103] 204 assessment unit

    [0104] 206 arrow that illustrates configuration

    [0105] 300 system for providing images

    [0106] 302 template

    [0107] 304 beam splitter

    [0108] 306 first objective

    [0109] 308 second objective

    [0110] 400 system for configuring the emulator

    [0111] 402 arrow that illustrates the training goal for the emulator

    [0112] 500 illustration of the desired mode of operation of the assessment unit

    [0113] 502 check mark

    [0114] 504 X

    [0115] 506 X

    [0116] 600 system for configuring the assessment unit

    [0117] 602 images

    [0118] 604 information belonging to images

    [0119] 606 determined information on whether the image was recorded by means of the second objective or not

    [0120] 608 comparison unit

    [0121] 610 arrow that illustrates configuration