CAMERA OF A MOBILE DEVICE FOR GENERATING A TELEPHOTO IMAGE REPRESENTATION

20260094242 · 2026-04-02

    Abstract

    A camera of a mobile device includes at least two entrance openings and at least two image sensors. A first entrance opening is assigned to a first image sensor via a first imaging path and a second entrance opening is assigned to a second image sensor via a second imaging path. Each of the entrance openings has a light entrance surface with a longitudinal direction and a transverse direction running perpendicular thereto. The length of the entrance opening in the longitudinal direction is at least 1.2 times larger than the width of the entrance opening in the transverse direction. The first imaging path and the second imaging path each include anamorphic optics. In addition, a mobile device including the camera, and a method for generating an image representation with the camera are provided.

    Claims

    1. A camera of a mobile device, the camera comprising: at least two entrance openings; and at least two image sensors, a first entrance opening of the at least two entrance openings being assigned to a first image sensor of the at least two image sensors via a first imaging path and a second entrance opening of the at least two entrance openings being assigned to a second image sensor of the at least two image sensors via a second imaging path, wherein: each of the at least two entrance openings has a light entrance surface with a longitudinal direction and a transverse direction running perpendicularly to the longitudinal direction, a length of each of the at least two entrance openings in the longitudinal direction is at least by a factor of 1.2 larger than a width of each of the at least two entrance openings in the transverse direction, and each of the first imaging path and the second imaging path includes an anamorphic optical unit, the anamorphic optical units forming an anamorphic system.

    2. The camera as claimed in claim 1, wherein the first entrance opening and the second entrance opening are arranged geometrically with respect to one another such that the longitudinal direction of the first entrance opening and the longitudinal direction of the second entrance opening form an angle of between 70 degrees and 110 degrees.

    3. The camera as claimed in claim 1, wherein the camera includes an image processing device configured to: receive image data captured by the at least two image sensors, generate transformed image data by transforming the image data received from the at least two image sensors with Fourier transformation, generate a common data set from the image data after transforming the image data, and generate inverse-transformed image data by inverse transforming the common data set with Fourier transformation.

    4. The camera as claimed in claim 3, wherein, to generate the common data set, the image processing device is further configured to: partly mask the transformed image data from the at least two image sensors such that the transformed image data mutually supplement and/or partly overlap one another, and/or select transformed image data partial regions such that the transformed image data mutually supplement and/or partly overlap one another.

    5. The camera as claimed in claim 3, wherein the image processing device is further configured to: correct artefacts and/or aberrations in an image representation generated with the inverse-transformed image data, and/or supplement image data in Fourier spectral ranges not captured by the at least two image sensors.

    6. The camera as claimed in claim 5, wherein the image processing device is further configured to correct artefacts and/or aberrations, and/or supplement the image data in the image representation generated with the inverse-transformed image data, with a neural network.

    7. The camera as claimed in claim 3, wherein the image processing device is configured to perform pixel binning.

    8. The camera as claimed in claim 1, further comprising a telephoto optical unit arranged in the first imaging path and/or the second imaging path.

    9. The camera as claimed in claim 1, wherein a first anamorphic optical unit is arranged in the first imaging path and has a first focal length, wherein a second anamorphic optical unit is arranged in the second imaging path and has a second focal length, wherein the first and second anamorphic optical units are configured such that a parallax error resulting from a positioning of the first and second entrance openings is reduced for objects at a distance which is less than 100 times the smaller of the first and second focal lengths of the anamorphic system.

    10. The camera as claimed in claim 1, wherein the camera has a field of view of at least 10 degrees.

    11. The camera as claimed in claim 1, wherein the at least two entrance openings and/or the at least two image sensors have a rectangular cross-sectional area, and/or wherein the at least two entrance openings have geometrically differing cross-sectional areas.

    12. A mobile device comprising the camera as claimed in claim 1.

    13. The mobile device as claimed in claim 12, wherein the mobile device is a cellular phone, a tablet, a notebook, a smartwatch, or a netbook.

    14. A method for generating an image representation with the camera as claimed in claim 1, the method comprising: capturing image data with the at least two image sensors; generating transformed image data by transforming the image data with Fourier transformation; generating a common data set from the transformed image data; and inverse transforming the common data set with the Fourier transformation.

    15. The method as claimed in claim 14, wherein generating a common data set from the transformed image data comprises at least one of combining, masking, cutting out, selecting, and superimposing specific image data regions.

    16. The method as claimed in claim 14, further comprising: correcting artefacts and/or aberrations in the image representation, and/or supplementing items of image information not captured in a frequency domain in the image representation.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0028] The disclosure will now be described with reference to the drawings wherein:

    [0029] FIG. 1 schematically shows a camera of a mobile device in a plan view according to an exemplary embodiment of the disclosure,

    [0030] FIG. 2 schematically shows a camera of a mobile device in the form of a block diagram according to an exemplary embodiment of the disclosure,

    [0031] FIG. 3 schematically shows the simulated beam path of one of the optical paths in a perspective view,

    [0032] FIG. 4 schematically shows the beam path shown in FIG. 3 in a side view,

    [0033] FIG. 5 schematically shows the step of capturing image data,

    [0034] FIG. 6 schematically shows the captured image data transformed with Fourier transformation,

    [0035] FIG. 7 schematically shows the step of masking specific image data regions,

    [0036] FIG. 8 schematically shows the steps of generating a common data set and inverse transforming,

    [0037] FIG. 9 schematically shows the effect of diffraction effects on the captured image data and the reduction thereof, and

    [0038] FIG. 10 schematically shows the effect of the aspect ratio on the formation of artefacts.

    DESCRIPTION OF EXEMPLARY EMBODIMENTS

    [0039] The disclosure is explained in greater detail below on the basis of exemplary embodiments with reference to the accompanying figures. Although the disclosure is illustrated and described in detail with reference to the exemplary embodiments, it is not restricted by the exemplary embodiments disclosed, and other variations can be derived therefrom by a person skilled in the art without departing from the scope of protection of the disclosure.

    [0040] The figures are not necessarily accurate in every detail and to scale and can be presented in enlarged or reduced form for the purpose of better clarity. For this reason, functional details disclosed here should not be understood to be limiting, but merely to be an illustrative basis that gives guidance to a person skilled in this technical field for using the present disclosure in various ways.

    [0041] The expression "and/or", when used here in a series of two or more elements, means that any one of the elements listed can be used alone, or any combination of two or more of the elements listed can be used. For example, if a structure is described as containing the components A, B and/or C, the structure can contain A alone; B alone; C alone; A and B in combination; A and C in combination; B and C in combination; or A, B, and C in combination.

    [0042] FIG. 1 schematically shows a camera 20 of a mobile device 1 in a plan view according to an exemplary embodiment of the disclosure. The mobile device 1 can be a cellular phone, for example. The camera 20 shown includes a first entrance opening 2 and a second entrance opening 3. In the exemplary embodiment shown in FIG. 1, the entrance openings 2 and 3 are configured identically and each have a longitudinal direction 13 and a transverse direction 14 running perpendicularly thereto. The length 15 in the longitudinal direction 13 of the entrance openings 2 and 3 is in each case at least by a factor of 1.2, typically by at least a factor of 2, larger than the width 16 in the transverse direction 14.

    [0043] The longitudinal directions 13 or center lines 12 running in the longitudinal direction 13 of the entrance openings 2 and 3 form an angle which is typically between 70 degrees and 110 degrees and is 90 degrees in the exemplary embodiment shown.

    [0044] In the exemplary embodiment shown in FIG. 1, the entrance openings 2 and 3 are arranged next to one another and offset with respect to one another. As an alternative thereto, a T-shaped arrangement or a non-offset L-shaped arrangement is also possible.

    [0045] FIG. 2 schematically shows a camera 20 of a mobile device 1 in the form of a block diagram. The camera 20 shown includes at least two image sensors 6 and 7, the first entrance opening 2 being assigned to a first image sensor 6 via a first imaging path 8 and a second entrance opening 3 being assigned to a second image sensor 7 via a second imaging path 9. The first imaging path 8 and the second imaging path 9 each include an anamorphic optical unit 4 and 5, respectively, and a telephoto optical unit (not explicitly shown).

    [0046] Optionally, the camera 20 includes an image processing device 10 configured for receiving image data captured with the aid of the image sensors 6 and 7, and for processing said image data. The data transfer is identified by arrows with the reference sign 11. The image processing device 10 is configured to transform the received image data from the image sensors 6 and 7 with Fourier transformation (see FIGS. 5 and 6), to generate a common data set from the transformed image data (see FIG. 7), i.e., to combine the transformed image data to form a common data set, and to inverse transform the generated common data set with Fourier transformation (see FIG. 8). Neural networks, as already described above, can be used for correcting artefacts and/or aberrations.
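The processing chain just described (transform, combine, inverse transform) can be sketched in a few lines. The following is a minimal illustrative sketch using NumPy, not the claimed implementation; the masks selecting the spectral regions contributed by each sensor are assumed inputs, since their construction depends on the aperture geometry.

```python
import numpy as np

def fuse_spectra(img_a, img_b, mask_a, mask_b):
    """Illustrative Fourier-domain fusion of two sensor images.

    img_a, img_b: 2-D arrays of equal shape (assumed already rectified
    to a common grid); mask_a, mask_b: masks selecting the spectral
    regions each elongated aperture resolves well (assumed inputs).
    """
    # Transform each captured image into the frequency domain.
    spec_a = np.fft.fft2(img_a)
    spec_b = np.fft.fft2(img_b)
    # Combine the well-resolved regions into a common data set,
    # averaging where the masks overlap to avoid double-counting.
    weight = mask_a + mask_b
    weight[weight == 0] = 1  # avoid division by zero outside both masks
    common = (spec_a * mask_a + spec_b * mask_b) / weight
    # Inverse transform the common data set back to the spatial domain.
    return np.real(np.fft.ifft2(common))
```

With full masks the round trip reproduces the input, which is a useful sanity check before experimenting with partial spectral coverage.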

    [0047] FIG. 3 schematically shows the beam path 17 of one of the optical paths 8, 9 of the camera 20 in a perspective view, this beam path having been simulated using the software Zemax. FIG. 4 schematically shows the beam path 17 shown in FIG. 3 in a side view. Light 17 entering the camera 20 through the rectangular entrance opening 2, 3 is reflected into a plane of the mobile device 1 with a mirror 18 and is subsequently guided in this plane to the rectangular image sensor 6, 7. In the exemplary embodiment shown, the image sensor 6, 7 is arranged perpendicularly to the plane of the mobile device 1, i.e., vertically. As an alternative thereto, an arrangement of the image sensor 6, 7 in the plane of the mobile device 1, i.e., horizontally, is also possible.

    [0048] An anamorphic optical unit 4, 5 is arranged in the beam path 17 between the entrance opening 2, 3 or the mirror 18 and the image sensor 6, 7. With the anamorphic optical unit 4, 5, the image or the image representation is distorted and the field of view or the FOV is enlarged in this way. In the exemplary embodiment shown in FIGS. 3 and 4, further optical elements, for example prisms and/or mirrors 19, are additionally arranged in the beam path, and bring about a folding of the beam path 17.

    [0049] In principle, the telephoto lenses necessary for generating a telephoto image representation, or a corresponding telephoto optical unit, require a large entrance opening. On account of the limited installation space in mobile devices, such as cellular phones, large entrance openings cannot be realized even with a folded beam path, in particular since the height of the mirror 18 necessary for folding the beam path is limited by the thickness or depth of the mobile device. This holds true particularly for entrance openings and image sensors configured in square fashion. A rectangular configuration of the entrance opening makes it possible at least to increase the effective size of the entrance opening. However, diffraction-governed artefacts occur in the case of relatively large aspect ratios, in particular larger than 3:2.

    [0050] In the exemplary embodiment shown, an aspect ratio of 3:1 is used for the two entrance openings 2 and 3 and the two image sensors 6 and 7. The anamorphic optical unit 4, 5 additionally used can bring about a stretching of the image representation of 2:1, for example, whereby the height of the respective image sensor 6, 7 can be halved in comparison with a square configuration (for example from 10×10 mm to 10×5 mm) by virtue of the image representation or the image being compressed in the diffraction direction. Both measures, i.e., firstly the increase of the aspect ratio and secondly the use of an anamorphic design, make it possible to integrate a telephoto system having a small f-number and a large FOV into a mobile device, for example a cellular phone.
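The numbers in this paragraph can be checked with a short back-of-envelope calculation. The 10 mm sensor edge and the 2:1 anamorphic stretch are the example values from the text; the area comparison against a square opening of the same (thickness-limited) width is an added illustration, not a figure from the disclosure.

```python
# Example values from the text above.
sensor_edge_mm = 10.0      # edge of the square reference sensor
anamorphic_stretch = 2.0   # 2:1 stretch of the image representation
aspect_ratio = 3.0         # 3:1 entrance opening, as in the embodiment

# The 2:1 stretch compresses the image in the diffraction direction,
# halving the required sensor height (10 x 10 mm -> 10 x 5 mm).
compressed_height_mm = sensor_edge_mm / anamorphic_stretch
print(f"sensor: {sensor_edge_mm} x {compressed_height_mm} mm")

# For a fixed width w (limited by the device thickness), a 3:1 opening
# collects three times the light of a square w x w opening.
w = 1.0
area_gain = (aspect_ratio * w * w) / (w * w)
print(f"entrance-area gain over square opening: {area_gain:.0f}x")
```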

    [0051] A method for generating an enlarged image representation, i.e., a telephoto image representation, with a camera, for example a camera described with reference to FIGS. 1 to 4, is explained in greater detail below with reference to FIGS. 5 to 10. In this case, a simulation on the basis of a paraxial system with two entrance openings 2, 3 each having an aspect ratio of 15:1 is used for the sake of better elucidation.

    [0052] In a first step, shown schematically in FIG. 5, image data, of the capital letter F in the present case, are captured with the two image sensors 6, 7. In this case, depending on the orientation of the entrance openings 2 and 3, a blur 29 (not able to be depicted well in the figures) occurs in a diffraction-governed manner. In addition, the image representations are compressed anamorphically in the diffraction direction.

    [0053] Afterward, in a second step, the captured image data are transformed with Fourier transformation. This is shown schematically in FIG. 6. The transformed image data (Fourier spectrum) depicted schematically are identified by the reference signs 21 and 22. The Fourier spectrum has high intensities in the regions 27. In regard to a square Fourier spectrum of an imaginary square image sensor, regions for which no image information has been captured are identified by the reference signs 23. The arrows 24 identify regions in which contributions of higher spatial frequencies are lost depending on the orientation of the entrance openings 2 and 3.

    [0054] In a further step, shown schematically in FIG. 7, the transformed image data 21 and 22 are masked and/or partial regions thereof are cut out. Afterward, the transformed and masked image data 21 and 22 shown in FIG. 7 are combined to form a common data set 25, the regions 23 with diffraction-governed loss of information being ignored or suppressed. This step is shown schematically on the left in FIG. 8. In practice, the masking can also be dispensed with and the relevant regions are directly combined. Furthermore, individual image data regions at the edges 28 from the different image data sets 21 and 22 can be made to overlap one another or superimposed on one another. As a result, visible transitions can be avoided, and the image quality overall can be improved.

    [0055] In a further step, shown in FIG. 8, the generated common data set 25 is inverse transformed with Fourier transformation. The result is the telephoto image representation, shown on the right in FIG. 8, of the capital letter F recorded with the camera 20. Advantageously, it is possible to correct artefacts and/or aberrations in the generated image representation 26 and/or to supplement missing regions 23 in the corners (see FIG. 8, on the left) in the generated image representation 26, for example with neural networks. In addition, the regions at which the transformed image data 21 and 22 were combined can be smoothed and/or corrected in regard to image aberrations, e.g., with rounded step functions.
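The rounded step functions mentioned for smoothing the joins could, for instance, be realized as a raised-cosine transition. This is one illustrative choice; the transition position and width are assumed parameters.

```python
import numpy as np

def rounded_step(n, center, width):
    """Raised-cosine transition from 0 to 1 over `width` samples,
    centered at `center` -- a smooth replacement for the hard edge
    of a binary mask when joining two Fourier spectra."""
    x = np.arange(n, dtype=float)
    t = np.clip((x - (center - width / 2.0)) / width, 0.0, 1.0)
    return 0.5 - 0.5 * np.cos(np.pi * t)
```

Blending the two spectra with weights `w` and `1 - w` built from such a transition avoids the visible seams that a hard cut between the combined regions would otherwise produce.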

    [0056] FIG. 9 illustrates in summary the effect of diffraction effects at the entrance openings on the image data captured with the image sensors 6 and 7, and also the significant reduction thereof in the image representation 26 generated according to the disclosure.

    [0057] FIG. 10 shows the effect of the aspect ratio of 15:1 of an entrance opening 2, 3 in comparison with an entrance opening 2, 3 having an aspect ratio of 3:1. The remaining artefacts are significantly reduced in the case of an aspect ratio of 3:1.

    LIST OF REFERENCE NUMERALS

    [0058] 1 mobile device
    [0059] 2 entrance opening
    [0060] 3 entrance opening
    [0061] 4 anamorphic optical unit
    [0062] 5 anamorphic optical unit
    [0063] 6 image sensor
    [0064] 7 image sensor
    [0065] 8 imaging path
    [0066] 9 imaging path
    [0067] 10 image processing device
    [0068] 11 data transfer
    [0069] 12 center line
    [0070] 13 longitudinal direction
    [0071] 14 transverse direction
    [0072] 15 length
    [0073] 16 width
    [0074] 17 beam path
    [0075] 18 mirror
    [0076] 19 prism/mirror
    [0077] 20 camera
    [0078] 21 transformed image data
    [0079] 22 transformed image data
    [0080] 23 regions with diffraction-governed loss of information
    [0081] 24 regions with missing higher spatial frequencies
    [0082] 25 common data set
    [0083] 26 image representation generated according to the disclosure
    [0084] 27 region with high intensities
    [0085] 28 image data regions for overlap
    [0086] 29 blur
    [0087] angle