METHOD AND APPARATUS FOR EMULATING CAMERA OBJECTIVES
20210185241 · 2021-06-17
Inventors
CPC classification
H04N5/2621
ELECTRICITY
G03B43/00
PHYSICS
International classification
H04N5/262
ELECTRICITY
Abstract
The invention relates to a method for configuring an emulation method for emulating a second camera objective, comprising the steps: determining a plurality of first images that are recorded using a first camera objective; determining a plurality of second images that are recorded using the second camera objective and whose image content corresponds to the image content of the plurality of first images; and configuring the emulation method for emulating the second camera objective on the basis of the plurality of first images and the plurality of second images.
Claims
1. A method for configuring an emulation method for emulating a second camera objective, comprising the steps: determining a plurality of first images that are recorded using a first camera objective; determining a plurality of second images that are recorded using the second camera objective and whose image content corresponds to the image content of the plurality of first images; and configuring the emulation method for emulating the second camera objective on the basis of the plurality of first images and the plurality of second images.
2. A method in accordance with claim 1, wherein the step of configuring the emulation method comprises determining parameters that represent the second camera objective.
3. A method in accordance with claim 1, wherein the plurality of first images are recorded under predefined conditions; wherein the plurality of second images are recorded under the predefined conditions; wherein the predefined conditions comprise at least one of: at least one predefined lighting situation; at least one predefined camera objective aperture setting; at least one predefined camera objective focus setting; at least one predefined camera objective focal length setting; at least one camera sensor sensitivity setting; or at least one camera sensor white balance setting.
4. A method in accordance with claim 1, wherein the method comprises the step of generating the plurality of first images and the step of generating the plurality of second images, wherein the plurality of first images and the plurality of second images are simultaneously recorded by two different image sensors.
5. A method in accordance with claim 4, wherein, for the simultaneous recording of the plurality of first images and the plurality of second images, a respective image motif is imaged onto a first image sensor via a beam splitter by the first camera objective and is simultaneously imaged onto a second image sensor via the beam splitter by the second camera objective.
6. A method in accordance with claim 1, wherein the emulation method uses a first artificial neural network.
7. A method in accordance with claim 6, wherein the first artificial neural network has a deconvolutional neural network.
8. A method in accordance with claim 1, wherein the emulation method is configured by optimizing a target function.
9. A method in accordance with claim 8, wherein the target function is based on the extent to which a respective first image differs from a corresponding second image after a processing by the emulation method.
10. A method in accordance with claim 8, wherein the target function is based on the extent to which a respective first image, after a processing by the emulation method, creates the impression on an observer of having been recorded by means of the second camera objective.
11. A method in accordance with claim 8, wherein the target function is based on a second artificial neural network.
12. A method in accordance with claim 11, wherein the second artificial neural network has a convolutional neural network.
13. A method in accordance with claim 11, wherein the emulation method uses a first artificial neural network, and wherein the emulation method is configured on the basis of a zero-sum game between the first artificial neural network and the second artificial neural network.
14. A method in accordance with claim 11, wherein the emulation method uses a first artificial neural network, and wherein the method further comprises the step: alternately optimizing the first artificial neural network and the second artificial neural network.
15. A method for emulating a second camera objective, comprising the steps: determining an image recorded using a first camera objective; determining parameters that represent the second camera objective; and applying a processing rule to the determined image using the parameters in order to emulate the second camera objective.
16. A method in accordance with claim 15, wherein the processing rule has an artificial neural network.
17. A method in accordance with claim 16, wherein the artificial neural network has a deconvolutional neural network.
18. A method in accordance with claim 15, wherein the method was configured in accordance with the method in accordance with claim 1.
19. A method in accordance with claim 18, wherein the configuration of the method comprises determining the parameters.
20. A method in accordance with claim 15, wherein the parameters are provided in encrypted form, wherein the step of determining the parameters comprises decrypting the parameters on the basis of a user key.
21. A method in accordance with claim 15, wherein a plurality of parameter sets are provided that correspond to at least one of: a plurality of different second camera objectives; a plurality of different first camera objectives; a plurality of camera objective aperture settings; a plurality of camera objective focus settings; a plurality of camera objective focal length settings; a plurality of camera sensor sensitivity settings; or a plurality of camera sensor white balance settings; wherein the method further comprises the steps: selecting at least one of: the second camera objectives; the first camera objectives; a camera objective aperture setting; a camera objective focus setting; a camera objective focal length setting; a camera sensor sensitivity setting; or a camera sensor white balance setting; determining a parameter set that includes parameters that represent at least one of: the selected second camera objective; the selected first camera objective; the selected camera objective aperture setting; the selected camera objective focus setting; the selected camera objective focal length setting; the selected camera sensor sensitivity setting; or the selected camera sensor white balance setting; and applying the processing rule to the recorded image using the parameters of the determined parameter set in order to emulate the second camera objective.
22. A method in accordance with claim 15, wherein, for the determination of the image, the first camera objective images an image motif onto an image sensor and the image sensor generates corresponding image signals.
23. A computer program product comprising commands that, when executed by a computer, cause said computer to perform the method in accordance with claim 1.
24. A motion picture camera that is configured to record a time sequence of frames and that has a control unit that is configured to process the frames in accordance with the method in accordance with claim 15.
25. A motion picture camera in accordance with claim 24, wherein the control unit is configured to form an artificial neural network.
26. A motion picture camera in accordance with claim 24, wherein the control unit has a decryption module that is configured to decrypt the parameters.
27. A motion picture camera in accordance with claim 24, wherein the motion picture camera has a housing that accommodates an image sensor for generating image signals and that has an interchangeable lens mount to which the first camera objective is selectively fastenable.
Description
[0071] The invention will be described in the following with reference to an embodiment and to the drawings.
[0072] FIG. 1 shows a system with an emulator;
[0073] FIG. 2 shows a system for configuring the emulator;
[0074] FIG. 3 shows a system for providing images;
[0075] FIG. 4 shows a system for configuring the emulator;
[0076] FIG. 5 shows an illustration of the desired mode of operation of the assessment unit; and
[0077] FIG. 6 shows a system for configuring the assessment unit.
[0078] Elements that are the same or of the same kind are marked by the same reference numerals in the drawings.
[0079] FIG. 1 shows a system 100 with an emulator 104. The emulator 104 processes a first image 102 that was recorded by means of the first objective and generates therefrom an emulated image 106 that is intended to emulate the image impression of the second objective.
[0080] For better legibility of the Figures, each image shown therein is marked in its upper right corner to indicate whether it is an image (not processed by means of the emulation method) that was recorded by means of the first objective (“1” in the upper right corner), an image that was recorded by means of the second objective (“2” in the upper right corner), or an emulated image that was recorded by means of the first objective and that is intended to emulate the image impression of the second objective (“1->2” in the upper right corner).
[0081] The emulator 104 may be configured (in other words: set up; in other words: trained) by machine learning. For example, the emulator 104 may be an (artificial) neural network.
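By way of illustration only, the following is a minimal sketch of what such an emulator could look like, assuming a PyTorch implementation; the class name LensEmulator and the concrete layer choices are assumptions made for this sketch and are not prescribed by the description (the deconvolutional output stage loosely follows claim 7).

```python
# Illustrative sketch of an emulator network; architecture and names are
# assumptions, not taken from the description.
import torch
import torch.nn as nn

class LensEmulator(nn.Module):
    """Maps an image recorded with the first objective to an image that is
    intended to look as if recorded with the second objective."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1),  # convolutional stage
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            # deconvolutional stage (cf. claim 7); preserves image size
            nn.ConvTranspose2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, first_image: torch.Tensor) -> torch.Tensor:
        return self.net(first_image)  # emulated image 106
```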
[0082] FIG. 2 shows a system 200 for configuring the emulator 104.
[0083] The emulated image 106, which represents an emulation of the second objective based on the first image 102 recorded by means of the first objective, may be compared with the second image 202, which was actually recorded by the second objective, using an assessment unit 204. The result of the comparison performed by the assessment unit 204 may be used to configure (as shown by the arrow 206) the emulator 104. During the configuration, parameters of the emulator 104 may be updated (for example, slightly adapted) on the basis of each pair of the first image 102 and the associated second image 202. The parameters of the emulator 104 may, for example, comprise the weights of the links between individual neurons of an artificial neural network.
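A hedged sketch of a single configuration step as just described, continuing the PyTorch sketch above: the emulated image is compared with the actually recorded second image, and the parameters of the emulator are slightly adapted. The L1 pixel loss used here is an assumed stand-in for the comparison performed by the assessment unit 204; the description leaves the concrete target function open.

```python
# Illustrative configuration step; the L1 loss is an assumption standing in
# for the assessment unit 204.
import torch

emulator = LensEmulator()                  # emulator 104 (sketch above)
optimizer = torch.optim.Adam(emulator.parameters(), lr=1e-4)
loss_fn = torch.nn.L1Loss()

def configuration_step(first_image: torch.Tensor,
                       second_image: torch.Tensor) -> float:
    emulated = emulator(first_image)       # emulated image 106
    loss = loss_fn(emulated, second_image) # comparison with second image 202
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                       # slight adaptation of the parameters
    return loss.item()
```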
[0085] Even though only one respective first image 102 and one second image 202 are shown in FIG. 2, the emulator 104 may be configured on the basis of a plurality of first images 102 and a plurality of corresponding second images 202.
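Continuing the sketch above, the configuration over such a plurality of image pairs could then look as follows; image_pairs and num_epochs are assumed to be provided by the user and are not part of the description.

```python
# Illustrative loop over a plurality of (first image, second image) pairs,
# reusing configuration_step() from the sketch above. All names are assumptions.
num_epochs = 10                                      # assumed training budget
for epoch in range(num_epochs):
    for first_image, second_image in image_pairs:    # assumed list of tensor pairs
        configuration_step(first_image, second_image)
```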
[0086] FIG. 3 shows a system 300 for providing images in which a template 302 is simultaneously imaged by the first objective 306 and by the second objective 308 via a beam splitter 304 in order to generate the first image 102 and the second image 202.
[0087] The use of the beam splitter 304 for generating the two images 102, 202 that have (at least substantially) the same content enables the simultaneous generation of the images, which is particularly advantageous for moving contents 302. However, in particular with static contents 302, the second image 202 may also be recorded directly (that is, without using the beam splitter 304) using the second objective 308, and before or after this, the first image 102 may be recorded using the first objective 306 (likewise without using the beam splitter 304).
[0088] FIG. 4 shows a system 400 for configuring the emulator 104 in which a training goal 402 for the assessment unit 204 is also illustrated.
[0089] With a (fixed or determined) first objective 306, a plurality of trainings may be performed with different second objectives 308, which then result in differently configured emulation methods (that is, for example, in different parameters or different parameter sets). The different parameter sets may then be stored (and associated with the (fixed or determined) first objective 306 and the respective second objective 308) so that one or another parameter set can be used for the emulation method as required. It is thus possible to make the images from a first objective 306 appear like the images from one or another second objective 308.
[0090] With a (fixed or determined) second objective 308, a plurality of trainings may be performed with different first objectives 306, which then result in differently configured emulation methods (that is, for example, in different parameters or different parameter sets). The different parameter sets may then be stored (and associated with the respective first objective 306 and with the (fixed or determined) second objective 308) so that one or another parameter set can be used for the emulation method as required. It is thus possible to make the images from different first objectives 306 appear like images from the second objective 308.
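Purely as an illustration of storing and retrieving such parameter sets per objective pairing, assuming the PyTorch sketches above; the file naming scheme and function names are assumptions made for this sketch.

```python
# Illustrative registry of parameter sets keyed by (first objective, second
# objective); the on-disk layout is an assumption.
import torch

def save_parameter_set(emulator, first_objective: str, second_objective: str) -> None:
    torch.save(emulator.state_dict(),
               f"{first_objective}__{second_objective}.pt")

def load_parameter_set(emulator, first_objective: str, second_objective: str) -> None:
    state = torch.load(f"{first_objective}__{second_objective}.pt")
    emulator.load_state_dict(state)  # emulator now emulates the selected pairing
```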
[0091] FIG. 5 shows an illustration 500 of the desired mode of operation of the assessment unit 204.
[0092] When the second image 202 recorded by means of the second objective is input, the information output by the assessment unit 204 on whether the second image 202 was recorded by means of the second objective or not should indicate that the second image 202 was recorded by means of the second objective, as represented by the check mark 502. When the first image 102 recorded by means of the first objective is input, the information output by the assessment unit 204 should indicate that the first image 102 was not recorded by means of the second objective, as represented by an X 504. When the image generated by the emulator 104 (that is, the emulated image 106) is input, the information output by the assessment unit 204 should indicate that the image 106 generated by the emulator 104 was not recorded by means of the second objective, as represented by an X 506.
[0093] As described further above, the emulator 104 may be configured (in other words, trained). The assessment unit 204 may also be configured (in other words, trained). In this respect, the goal is to configure the emulator 104 such that it generates images that cannot be distinguished from images that were recorded by means of the second objective. The goal of the training of the assessment unit 204 is to configure it such that it identifies any image that was not recorded by means of the second objective as such. In this respect, the assessment unit 204 should also be able to identify images that were generated by means of the emulator 104 as not having been recorded by means of the second objective. The emulator 104 and the assessment unit 204 are therefore in competition with one another in this sense, so that the configuration of the emulator 104 and the configuration of the assessment unit 204 may be designated as a zero-sum game. The “better” the emulator 104 is, the more difficult it is for the assessment unit 204 to identify an emulated image as not actually having been recorded by means of the second objective.
[0094] The assessment unit 204 may be configured (in other words: set up; in other words: trained) by machine learning. For example, the assessment unit 204 may be an (artificial) neural network.
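By way of illustration, the assessment unit 204 could be sketched as a small convolutional classifier (cf. claim 12); the architecture shown here is an assumption for this sketch and is not taken from the description.

```python
# Illustrative sketch of the assessment unit 204 as a convolutional classifier.
import torch
import torch.nn as nn

class AssessmentUnit(nn.Module):
    """Outputs a logit: high if the input appears recorded by the second objective."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),   # global pooling to a 64-dim descriptor
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        h = self.features(image).flatten(1)
        return self.classifier(h)      # information 606 as a raw logit
```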
[0095] FIG. 6 shows a system 600 for configuring the assessment unit 204. A plurality of images 602 are provided, together with associated information 604 that indicates for each image 602 whether it was actually recorded by means of the second objective or not.
[0096] Each image 602 is fed to the assessment unit 204 and the assessment unit 204 outputs information 606 on whether the image 602 (according to the assessment by the assessment unit 204) was recorded by means of the second objective or not. In a comparison unit 608, the information 606 that was output by the assessment unit 204 is processed, for example compared, together with the information 604 that indicates whether the image 602 was actually recorded by means of the second objective or not, and the assessment unit 204 is configured on the basis of the processing, as represented by the arrow 610. In this respect, the configuration may comprise determining parameters that the assessment unit 204 uses. The parameters may be updated (for example, slightly adapted) on the basis of each image 602 to be processed and the associated information 604, for example using a gradient method. The configuration may thus comprise setting the parameters of the assessment unit 204 such that the information 606 determined by the assessment unit 204 coincides as well as possible with the input information 604 for as many input images 602 as possible.
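The alternating optimization of the emulator 104 and the assessment unit 204 (cf. claims 13 and 14) could be sketched as follows, assuming the two network sketches above and a binary cross-entropy target function in the spirit of a generative adversarial setup; this is an illustrative sketch, not the method as prescribed by the description.

```python
# Illustrative alternating optimization round (zero-sum game, claims 13/14).
import torch

emulator = LensEmulator()     # emulator 104 (sketch above)
assessor = AssessmentUnit()   # assessment unit 204 (sketch above)
opt_e = torch.optim.Adam(emulator.parameters(), lr=1e-4)
opt_a = torch.optim.Adam(assessor.parameters(), lr=1e-4)
bce = torch.nn.BCEWithLogitsLoss()

def alternating_step(first_image: torch.Tensor, second_image: torch.Tensor) -> None:
    # 1) Configure the assessment unit 204: real second images should be taken
    #    for "recorded by the second objective" (check mark 502), emulated
    #    images should not (X 506).
    real_logit = assessor(second_image)
    fake_logit = assessor(emulator(first_image).detach())
    loss_a = (bce(real_logit, torch.ones_like(real_logit))
              + bce(fake_logit, torch.zeros_like(fake_logit)))
    opt_a.zero_grad()
    loss_a.backward()
    opt_a.step()

    # 2) Configure the emulator 104: adapt it so that the emulated image 106
    #    is taken by the assessment unit for an image recorded by means of
    #    the second objective.
    fake_logit = assessor(emulator(first_image))
    loss_e = bce(fake_logit, torch.ones_like(fake_logit))
    opt_e.zero_grad()
    loss_e.backward()
    opt_e.step()
```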
REFERENCE NUMERAL LIST
[0097] 100 system with an emulator
[0098] 102 first image
[0099] 104 emulator
[0100] 106 emulated image
[0101] 200 system for configuring the emulator
[0102] 202 second image
[0103] 204 assessment unit
[0104] 206 arrow that illustrates configuration
[0105] 300 system for providing images
[0106] 302 template
[0107] 304 beam splitter
[0108] 306 first objective
[0109] 308 second objective
[0110] 400 system for configuring the emulator
[0111] 402 training goal for the assessment unit
[0112] 500 illustration of the desired mode of operation of the assessment unit
[0113] 502 check mark
[0114] 504 X
[0115] 506 X
[0116] 600 system for configuring the assessment unit
[0117] 602 images
[0118] 604 information belonging to images
[0119] 606 determined information on whether the image was recorded by means of the second objective or not
[0120] 608 comparison unit
[0121] 610 arrow that illustrates configuration