Abstract
A view system (100A, 100B) for a vehicle (1) has an image capture unit (10, 10A, 10B) for capturing image data of a portion around the vehicle (1), an image processing unit (20A, 20B) for processing the captured image data, and an image reproduction unit (30, 30A, 30B) for reproducing the processed image data. The image capture unit includes an image sensor (11) and an optical element having a distortion curve. The image data are distorted, dependent on the position on the image sensor, in accordance with the distortion curve. The image processing unit (20A, 20B) is adapted for taking at least one partial portion (12, 12′) from the image data of the image sensor (11), wherein the geometry of the partial portion (12, 12′) is dependent on the position of the partial portion (12, 12′) on the image sensor (11).
Claims
1. A view system for a vehicle, comprising: at least one image capture unit for capturing image data of a portion around a vehicle, wherein the image capture unit has an image sensor and an optical element, wherein the optical element has a distortion curve, wherein image data dependent on the position on the image sensor are distorted in accordance with the distortion curve of the optical element; at least one image processing unit for processing the image data captured by the image capture unit; and at least one reproduction unit for reproducing the image data processed by the image processing unit, wherein the image processing unit is configured for taking at least one partial portion from the image data of the image sensor, wherein the geometry of the partial portion is dependent on the position of the partial portion on the image sensor, and wherein the geometry of the partial portion is determined by a calculation algorithm which uses at least one compensation curve (K), with which the width (P1′) and/or the length (P2′) of the partial portion on the image sensor is determined dependent on a change in position of the depiction angle (α, α1, α2) of the image capture unit corresponding to the partial portion.
2. The view system according to claim 1, wherein the partial portion defines a reference partial portion with a reference geometry.
3. The view system according to claim 2, wherein the partial portion defines a reference partial portion, if the partial portion is taken from the image sensor in the vicinity of a position which corresponds to the depiction of a center of distortion of the image capture unit on the image sensor.
4. The view system according to claim 3, wherein the partial portion is taken from the image sensor at a position or directly adjacent to a position which corresponds to the depiction of the center of distortion of the image capture unit on the image sensor.
5. The view system according to claim 3, wherein the center of distortion is the optical axis.
6. The view system according to claim 2, wherein the reference geometry is a rectangle with a width (P1) and a length (P2).
7. The view system according to claim 2, wherein the partial portion has a geometry which is rotated and/or distorted and/or scaled with respect to the reference geometry, if the partial portion is arranged distally to the position which corresponds to the depiction of the center of distortion of the image capture unit on the image sensor.
8. The view system according to claim 7, wherein the degree of rotation and/or distortion and/or scaling of the reference geometry increases at least partially with increasing distance to the depiction of the center of distortion of the image capture unit on the image sensor.
9. The view system according to claim 7, wherein the degree of rotation and/or distortion and/or scaling of the reference geometry decreases area by area with increasing distance to the depiction of the center of distortion of the image capture unit on the image sensor.
10. The view system according to claim 2, wherein the depiction angle (α, α1, α2), which corresponds to the reference partial portion, and the depiction angle (α′, α1′, α2′), which corresponds to the partial portion, are equal.
11. The view system according to claim 1, wherein the determination of a first dimension of the geometry of the partial portion occurs in a first spatial direction.
12. The view system according to claim 11, wherein the determination of a second dimension of the geometry of the partial portion occurs in a second spatial direction.
13. The view system according to claim 12, wherein the first and the second spatial direction run perpendicular to each other.
14. The view system according to claim 1, wherein the compensation curve (K) corresponds to a non-linear curve and/or at least one mathematical function.
15. The view system according to claim 1, wherein the compensation curve (K) corresponds to a freely defined curve.
16. The view system according to claim 15, wherein the values of the freely defined curve are empirically determined and are stored in the processing unit.
17. The view system according to claim 1, wherein the compensation curve (K) corresponds to the distortion curve of the optical element.
18. The view system according to claim 17, wherein the compensation curve (K) additionally corresponds to a compensation curve of the optical element with a digital distortion correction.
19. The view system according to claim 1, wherein the partial portion includes the field of view of a main mirror and/or a wide angle mirror of a commercial vehicle.
20. A mirror replacement system for a vehicle with a view system, according to claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) In the following, the invention is exemplarily described with reference to the enclosed figures, in which:
(2) FIG. 1 shows a schematic structure of two view systems according to an embodiment of the invention,
(3) FIG. 2a shows a plan view of a commercial vehicle which uses the view system of FIG. 1,
(4) FIG. 2b shows a side view of the commercial vehicle of FIG. 2a, which uses the view system of FIG. 1,
(5) FIG. 2c shows a three-dimensional field of view of a camera,
(6) FIG. 3a shows a plan view of a first and a second depiction angle which are captured by a vehicle camera of the view system according to the invention in accordance with a first embodiment,
(7) FIG. 3b shows an image sensor of the view system according to the invention in accordance with the first embodiment, on which a first partial portion is defined,
(8) FIG. 3c shows a monitor of the view system according to the invention in accordance with the first embodiment, on which the first partial portion is depicted,
(9) FIG. 3d shows an image sensor of the view system according to the invention in accordance with the first embodiment, on which a second partial portion is defined,
(10) FIG. 3e shows the monitor of the view system according to the invention in accordance with the first embodiment, on which the second partial portion is depicted,
(11) FIG. 3f shows a compensation curve of the camera of the view system according to the invention in accordance with the first embodiment,
(12) FIG. 4a shows the image sensor of the view system according to the invention in accordance with the first embodiment, on which the second partial portion is modified,
(13) FIG. 4b shows a monitor of the view system according to the invention in accordance with the first embodiment, on which the modified partial portion is depicted,
(14) FIG. 5a shows a plan view of a first and a second depiction angle which are captured by a vehicle camera of the view system according to the invention in accordance with a second embodiment,
(15) FIG. 5b shows a side view of a first and a second depiction angle which are captured by a vehicle camera of the view system according to the invention in accordance with the second embodiment,
(16) FIG. 5c shows an image sensor of the view system according to the invention in accordance with the second embodiment, on which a first partial portion is defined,
(17) FIG. 5d shows a monitor of the view system according to the invention in accordance with the second embodiment, on which the first partial portion is depicted,
(18) FIG. 5e shows an image sensor of the view system according to the invention in accordance with the second embodiment, on which a second partial portion is defined,
(19) FIG. 5f shows the monitor of the view system according to the invention in accordance with the second embodiment, on which the second partial portion is depicted,
(20) FIG. 5g shows a compensation curve of the image processing unit of the view system according to the invention in accordance with the second embodiment in a first spatial direction,
(21) FIG. 5h shows a compensation curve of the image processing unit of the view system according to the invention in accordance with the second embodiment in a second spatial direction,
(22) FIG. 5i shows an image sensor of the view system according to the invention in accordance with a third embodiment, on which two partial portions are defined,
(23) FIG. 5j shows a developed view of the grid cylinder of FIG. 2c related to the situation in FIG. 5i,
(24) FIG. 5k shows a monitor of the view system according to the invention in accordance with a third embodiment,
(25) FIG. 6a shows an image sensor of the view system according to the invention in accordance with a fourth embodiment,
(26) FIG. 6b shows a developed view of the grid cylinder of FIG. 2c related to the situation in FIG. 6a, and
(27) FIG. 6c shows a monitor of the view system according to the invention in accordance with the fourth embodiment.
DETAILED DESCRIPTION OF THE FIGURES
(28) FIG. 1 shows a schematic structure of two view systems 100A, 100B according to an embodiment of the present invention, which, for example, form a view system 100A for the left vehicle side and a view system 100B for the right vehicle side. Each of the view systems 100A, 100B has a capture unit 10A, 10B, a processing unit 20A, 20B and a reproduction unit 30A, 30B. Thus, each of the view systems 100A, 100B corresponds to an indirect view system such as a camera monitor system, and, thus, a mirror replacement system with which the environment around a vehicle may be indirectly viewed.
(29) Each of the capture units 10A, 10B is adapted to capture/record images of an environment around a vehicle, in particular, a commercial vehicle, in the form of image data. In this respect, the capture unit 10A, 10B is attached to the vehicle in a suitable manner. The capture unit 10A, 10B may be a camera, in particular, a camera with a sensor according to a CMOS or CCD technology, or any other image sensor which is suitable to capture moving images. A plurality of capture units 10A, 10B may be provided per view system 100A, 100B. The capture unit 10A, 10B communicates with a respective one of the processing units 20A, 20B, such as via connection wires or radio communication.
(30) The corresponding one of the processing units 20A, 20B is adapted to process the image data captured by the capture unit 10A, 10B. In this respect, the processing unit 20A, 20B uses predetermined parameters, such as the resolution, the contrast, the color saturation, the color temperature and the color tones, the exposure time, etc., and changes these or other parameters, in particular for the purpose of optimization of the image depicted on the reproduction unit 30A, 30B.
(31) The corresponding one of the reproduction units 30A, 30B is adapted for showing/reproducing images which are captured by the corresponding one of the capture units 10A, 10B and which are processed by the corresponding one of the processing units 20A, 20B. The reproduction unit 30A, 30B may be a monitor, such as an LCD, TFT or LED monitor. A plurality of reproduction units 30A, 30B may be provided per view system 100A, 100B. The reproduction units 30A, 30B are preferably installed inside a driver's cabin of a vehicle, further preferably on one or both A-pillars of a vehicle, such that they may be viewed in an unhindered manner by a driver during driving.
(32) FIG. 2a shows a plan view of a commercial vehicle 1. Presently, the commercial vehicle 1 is a truck/heavy goods vehicle (HGV) which has a tractor and a semi-trailer or a trailer. A camera 10A, 10B is attached to each of the front left and right sides of the driver's cabin of the tractor, as viewed in the driving direction of the truck. As shown in FIG. 2a, the camera 10A captures a portion of the environment which lies on the right adjacent to the rear part of the tractor and adjacent to the trailer and the semi-trailer, respectively, as viewed in the driving direction of the truck. Even if not shown in FIG. 2a, the camera 10B is configured in accordance with the camera 10A to monitor a portion of the environment which lies on the left adjacent to the rear part of the tractor and adjacent to the trailer and the semi-trailer, respectively, as viewed in the driving direction of the truck. Each of the right and the left portion of the environment is captured by the respective camera 10A, 10B with a first angle of view γ1, wherein in FIG. 2a the projection γ1* of the angle of view can be seen, which spans horizontally, i.e., approximately parallel to a planar road surface.
(33) On the right adjacent to the trailer of the truck, four objects 40, 50, 60, 70 are located. The objects 40, 50, 60, 70 lie within the projection γ1* of the first angle of view of the camera 10A and, thus, are captured by the camera 10A. The objects 40, 50, 60, 70, for example, may be an obstacle in the form of an item, such as a further vehicle or a pillar, or a person. Each of the objects 40 and 50 and the objects 60 and 70, respectively, lies on a circular arc whose center is located at the position of the camera 10A. In other words, the objects 40 and 50 and the objects 60 and 70, respectively, have the same distance (radius) to the camera. The objects 40 and 60, however, lie closer to the vehicle 1 than the objects 50 and 70.
(34) In the driving situation shown in FIG. 2a, the objects 40 and 60 are located in the projection α1* of a first depiction angle α1. The first depiction angle α1 corresponds to an angle which is smaller than the first angle of view γ1 of the camera 10A, lies within the first angle of view γ1 and whose image data (from the entire image data of the capture angle γ1) are shown to the driver on a reproduction unit (not shown in FIG. 2a).
(35) FIG. 2b shows a side view of the truck of FIG. 2a. As shown in FIG. 2b, the capture region of the camera 10B (as well as the capture region of the camera 10A) extends not only horizontally to the rear, but also obliquely downwards to the rear, such that FIG. 2b shows a projection γ2* of a second angle of view γ2 of each of the cameras 10A, 10B. The second angle of view γ2 extends perpendicular to the plane in which the first angle of view γ1 of the camera 10A is located, and expands rearwards. By the first and second angles of view γ1 and γ2, each of the cameras 10A, 10B spans a respective capture cone, which expands starting from the respective camera 10A, 10B downwards to the rear.
(36) As further shown in FIG. 2b, the objects 40, 50, 60 and 70 (objects 60 and 70 are not shown in FIG. 2b) lie within the second angle of view γ2 of the camera 10A and, specifically, are located in a second depiction angle α2, which corresponds to an angle which is smaller than the second angle of view γ2 of the camera 10A, lies within the second angle of view γ2 and whose image data (from the entire image data of the angle of view γ2) are shown to the driver on a reproduction unit (not shown in FIG. 2b).
(37) As can be taken from FIGS. 2a and 2b, the objects 40, 50, 60 and 70 have, as an example, an approximately cylindrical shape whose longitudinal axis runs in a direction perpendicular to the road surface, and are assumed to have the same size for the sake of explanation.
(38) In FIG. 2c, a three-dimensional field of view of a camera, such as the camera 10A or 10B of FIGS. 2a and 2b, respectively, is exemplarily shown in an earth axes system (coordinate system X-Y-Z). The camera captures a three-dimensional field of view which is limited by the edges of the field of view Sk1, Sk2, Sk3 and Sk4. The optical axis of the camera runs obliquely downwards, if the road surface FO is assumed as reference surface. Specifically, the optical axis runs at an inclination angle ϕ to the horizontal line HL_O, which runs through the first principal point of the optics.
(39) As shown in FIG. 2c, the image data, i.e., the environment of the vehicle, are captured by the camera in the form of an imaginary cylindrical grid pattern. The cylindrical grid pattern has vertical net lines VL and horizontal net lines HL which run perpendicular to the vertical net lines. The horizontal line HL_O, which runs through the principal point of the optics, has an intersection point with an axis of rotation a_R of the grid pattern, which runs in the vertical direction Z through the center of the cylinder cross section.
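As a purely illustrative aid (not part of the claimed subject matter), the imaginary grid cylinder of FIG. 2c may be sampled as sketched below in Python; the radius, height, line counts and sample counts are assumed example values. The vertical net lines VL and horizontal net lines HL are returned as point lists in the earth axes system X-Y-Z, with the axis of rotation a_R running along the vertical direction Z.

    import math

    def cylinder_grid(radius=10.0, height=5.0, n_vl=24, n_hl=6, samples=48):
        """Sample the imaginary grid cylinder: vertical net lines VL (constant
        azimuth, varying height) and horizontal net lines HL (constant height Z)."""
        vl = []
        for i in range(n_vl):
            azimuth = 2.0 * math.pi * i / n_vl
            vl.append([(radius * math.cos(azimuth), radius * math.sin(azimuth),
                        height * k / (samples - 1)) for k in range(samples)])
        hl = []
        for j in range(n_hl):
            z = height * j / (n_hl - 1)
            hl.append([(radius * math.cos(2.0 * math.pi * k / samples),
                        radius * math.sin(2.0 * math.pi * k / samples), z)
                       for k in range(samples + 1)])
        return vl, hl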
(40) FIG. 3a shows a plan view of a camera 10 of the view system according to the invention, similar to FIG. 2a. In contrast to FIG. 2a, in the scenario of the vehicle environment shown in FIG. 3a, not only the objects 40 and 60 are arranged in a first depiction angle α1 of the camera 10, but also the objects 50 and 70 are arranged in a first depiction angle α1′ of the camera 10. The first depiction angles α1 and α1′ are approximately of the same size, i.e., they have approximately the same angle extension. The first depiction angles α1 and α1′ both lie within the first angle of view γ1 (not shown) of the camera 10.
(41) FIG. 3b shows an image sensor 11 of the view system according to the invention. The image sensor 11 has a rectangular shape with a longer extension in an up and down direction than in a right and left direction of FIG. 3b. In the vicinity of the left edge of the image sensor 11 shown in FIG. 3b, a first partial portion 12 is defined as the portion to be taken. As shown in FIG. 3b, the partial portion 12 comprises the depiction of the optical axis of the camera 10. The first partial portion 12 has a rectangular shape whose longitudinal extension runs in an up and down direction in FIG. 3b and whose width extension runs in a left and right direction in FIG. 3b. The first partial portion 12 has a width P in the width extension. The first partial portion 12 corresponds to the image data which are located in the depiction angle α1 and may, for example, comprise the field of view of a main mirror and/or a wide angle mirror (e.g., the fields of view II and IV as defined in ECE R 46). Such a partial portion 12, which, as here, lies in the vicinity of the depiction of the optical axis where the distortion is preferably low, may serve as a reference partial portion.
(42) FIG. 3c shows a monitor 30 (image reproduction unit) of the view system according to the invention. The monitor 30 has a rectangular shape with a longer extension in an up and down direction than in a left and right direction of FIG. 3c. The monitor 30 depicts the first partial portion 12. In this respect, the depiction surface of the monitor 30 substantially corresponds to the surface of the partial portion 12. On the monitor 30, image data of the vehicle environment are reproduced which are located in the depiction angle α1 and, thus, in the first partial portion 12. In particular, the objects 40 and 60 are reproduced on the monitor 30.
(43) As shown in FIG. 3c, the objects 40 and 60 are reproduced on the monitor 30 in an almost distortion-free manner and only have different sizes in the representation, since they are located at different distances from the camera 10 in the vehicle environment. Specifically, the object 40 is located closer to the camera 10 than the object 60, which is why it is represented larger on the monitor 30 than the object 60, and the object 60 is located further away from the camera 10 than the object 40, which is why it is represented smaller on the monitor 30 than the object 40. As the driver also perceives objects at different distances in the vehicle environment in different sizes (the further away from the driver, the smaller), a realistic reproduction of the vehicle environment thus results for the driver. The object 40 has a length L, which extends in a left and right direction on the monitor 30. The almost distortion-free reproduction of the objects 40 and 60 results from the fact that the objects 40 and 60 are located directly on the center of distortion or in its direct vicinity, here the optical axis of the camera 10 and of the optical element (not shown), respectively, where the distortion behavior of the optical element, which is presently barrel-shaped, is very low. At the position of the first partial portion 12 on the image sensor 11, thus, no compensation of the distortion behavior of the optical element is required and the objects 40 and 60 may be represented on the monitor 30 in an unchanged manner with regard to the compensation of the distortion. The same may also apply if the partial portion taken from the image sensor 11 does not comprise the center of distortion, but is directly adjacent thereto and/or contacts the center of distortion. The proportions of the height to the width of the objects 40 and 60 on the monitor 30 approximately correspond to the proportions of the height to the width of the objects 40 and 60 in reality.
(44) FIG. 3d shows again the image sensor 11 of FIG. 3b. In contrast to FIG. 3b, in FIG. 3d a further, second partial portion 12′ is defined next to the right edge of the image sensor 11, which is located at another position on the image sensor 11 than the partial portion 12. The partial portion 12′ does not comprise the depiction of the optical axis of the camera 10, but is rather arranged distally to the optical axis and the center of distortion, respectively. Like the first partial portion 12, the second partial portion 12′ also has a rectangular shape whose longitudinal extension runs in an up and down direction in FIG. 3d and whose width extension runs in a left and right direction in FIG. 3d. The second partial portion 12′ has a width P′ in the width direction which is changed compared to the width P of the first partial portion 12. The second partial portion 12′ corresponds to the image data which are located in the depiction angle α1′ and may also comprise the field of view of a main mirror and/or a wide angle mirror (e.g., the fields of view II and IV as defined in ECE R 46).
(45) FIG. 3e shows again the monitor 30 of FIG. 3c. The monitor 30 now reproduces the second partial portion 12′. In this respect, the depiction surface of the monitor 30 approximately corresponds to the surface of the partial portion 12′. On the monitor 30, the image data of the vehicle environment are reproduced which are located in the depiction angle α1′ and, thus, in the second partial portion 12′. In particular, the objects 50 and 70 are reproduced on the monitor 30.
(46) Preferably, only one monitor 30 is provided for each vehicle side. In order to view the different partial portions 12 and 12′, the driver may either manually switch from a view which shows the partial portion 12 to a view which shows the partial portion 12′, or the switching of the views occurs dependent on the driving situation. Alternatively, however, more than one monitor 30 may also be provided per vehicle side. That is, for example, two monitors 30 may be provided, the first monitor 30 showing the partial portion 12 and the second monitor (not shown) showing the partial portion 12′.
(47) As shown in FIG. 3e, the objects 50 and 70 have different sizes, since they are located at different distances from the camera 10 in the vehicle environment. Specifically, the object 50 is located closer to the camera 10 than the object 70, which is why it is represented larger on the monitor 30 than the object 70, and the object 70 is located further from the camera 10 than the object 50, which is why it is represented smaller on the monitor than the object 50, which leads to a realistic reproduction of the vehicle environment for the driver. In the substantially distortion-free representation, the proportions of the objects 50 and 70 on the monitor 30, i.e., their widths and heights, correspond approximately to the proportions, i.e., the widths and heights, of the objects 50 and 70 in reality.
(48) The objects 50 and 70 are reproduced on the monitor 30 of FIG. 3e almost distortion-free, i.e., approximately in the same size and shape as the objects 40, 60. The almost distortion-free reproduction of the objects 50 and 70 results from the fact that the partial portion 12′ has a changed geometry compared to the partial portion 12. Specifically, the partial portion 12′ has a smaller width P′ than the width P of the partial portion 12. The reduction of the width P of the partial portion 12, which is taken from the image sensor 11 on or next to the depiction of the optical axis as the center of distortion, to the width P′ of the partial portion 12′ serves to compensate the distortion behavior of the optical element. In other words, the distortion behavior of the optical element may be compensated or at least reduced in that the geometry, e.g., at least a side length, of the partial portion 12′ taken from the image sensor 11 is changed dependent on the taking position compared to the geometry of the partial portion 12, which is located on or next to the depiction of the optical axis as the center of distortion on the image sensor or generally in another portion of the image sensor 11. Dependent on the kind of distortion behavior of the optical element, a reduction or an enlargement of the geometry of the partial portion 12 is required in this respect. The object 50 represented on the monitor has a width L′ which extends substantially in a left and right direction on the monitor 30, and the proportions of the object 50 and the object 40 on the monitor 30 are approximately equal to the proportions of the object 50 and the object 40 in reality, in case of a substantially distortion-free representation.
(49) In FIG. 3f, the compensation curve K of the camera 10 and of the optical element (not shown) of the camera 10 is shown in a diagram. In the diagram, an angle extension in degrees [°] is indicated on the abscissa and a length extension in pixels is indicated on the ordinate. The compensation curve K is a curve which runs through the point of origin (which represents the center of distortion, such as the optical axis) and extends non-linearly upwards to the right in FIG. 3f, wherein its slope flattens slightly with increasing distance from the point of origin. In FIG. 3f, the compensation curve K corresponds to the distortion curve of the optical element. However, it is also conceivable that the compensation curve K corresponds to a freely defined curve whose values are empirically determined and stored in the processing unit 20A, 20B. Alternatively, the compensation curve may correspond to at least one mathematical function.
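A minimal sketch, in Python and with assumed example values, of how such a compensation curve K may be held in the processing unit 20A, 20B: either as a mathematical function (here a flattening, barrel-type mapping chosen merely as one possible example) or as a freely defined curve whose empirically determined sample values are stored and interpolated.

    import bisect
    import math

    def k_model(angle_deg, f_px=800.0):
        # One possible mathematical function for a flattening curve through the
        # origin: the length extension in pixels grows sub-linearly with the angle.
        return f_px * math.sin(math.radians(angle_deg))

    # Freely defined curve: empirically determined samples (angle in degrees,
    # length extension in pixels) stored in the processing unit.
    K_SAMPLES = [(0.0, 0.0), (10.0, 140.0), (20.0, 270.0), (30.0, 380.0), (40.0, 470.0)]

    def k_empirical(angle_deg):
        # Linear interpolation between the stored sample points.
        angles = [a for a, _ in K_SAMPLES]
        i = bisect.bisect_left(angles, angle_deg)
        if i == 0:
            return K_SAMPLES[0][1]
        if i >= len(K_SAMPLES):
            return K_SAMPLES[-1][1]
        (a0, r0), (a1, r1) = K_SAMPLES[i - 1], K_SAMPLES[i]
        return r0 + (r1 - r0) * (angle_deg - a0) / (a1 - a0)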
(50) The compensation curve K is used by the image processing unit 20A, 20B for determining the widths P, P′ of the partial portions 12, 12′ on the image sensor 11 dependent on a change in position of the depiction angle α1, α1′ of the camera 10 corresponding to the partial portion 12, 12′. In other words, the image processing unit 20A, 20B may determine, by use of the compensation curve K, which width P, P′ the partial portion 12, 12′ taken from the image sensor 11 has dependent on the displacement of the corresponding depiction angle α1, α1′ and, thus, dependent on the position on the image sensor 11. The compensation shown in FIG. 3f refers to a compensation in a first spatial direction, which presently corresponds to a horizontal spatial direction parallel to the road surface.
(51) In the diagram of FIG. 3f, the angles α1, α1′ of FIG. 3a and the width dimensions P, P′ of the partial portions 12, 12′ of FIGS. 3b and 3d are indicated. In order to compensate a distortion, the geometry of the partial portions 12, 12′ is thus dependent on where they are taken from the image sensor surface. Therefore, the geometries of the partial portions 12, 12′ differ from each other. As shown in FIG. 3f, the width P, P′ of the partial portions 12, 12′, for example, decreases with increasing angle extension, while the angles α1, α1′ stay equal. However, it is also conceivable that the angles α1, α1′ also change slightly dependent on the position of the partial portion 12, 12′, whereupon the corresponding width P, P′ of the respective partial portion 12, 12′ changes as well, although not to the same extent as when the angles α1, α1′ do not change.
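The width determination described above can be sketched as follows (a non-authoritative example in Python; the curve k and the numeric angle values are assumptions): the width of a partial portion is the difference of the curve values at the two limits of its depiction angle, so that, for a flattening curve, a depiction angle of equal extension yields a smaller width the further it is displaced from the center of distortion.

    import math

    def partial_portion_width(k, offset_deg, alpha_deg):
        # Width in pixels of the partial portion whose depiction angle alpha
        # starts offset_deg away from the center of distortion.
        return k(offset_deg + alpha_deg) - k(offset_deg)

    k = lambda a: 800.0 * math.sin(math.radians(a))   # flattening example curve
    P = partial_portion_width(k, 0.0, 15.0)           # reference partial portion 12
    P_prime = partial_portion_width(k, 25.0, 15.0)    # displaced partial portion 12'
    assert P_prime < P  # the width decreases with increasing angle extension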
(52) As shown in FIGS. 3c and 3e, the objects 40 and 60 and the objects 50 and 70, respectively, are represented on the monitor 30 in a hardly or merely slightly distorted manner. By the hardly or merely slightly distorted representation of the objects 40 and 60 and 50 and 70, respectively, on the monitor 30, the driver may well recognize the size of the objects 40 and 60 and 50 and 70 as well as their position and orientation in the vehicle environment. Thus, the driver may reliably assess whether or not the truck collides with one or more of the objects 40, 50, 60 and 70 during the planned driving maneuver.
(53) FIG. 4a shows again the image sensor 11 of FIG. 3b. A partial portion 12″ is taken from the image sensor 11 which substantially corresponds to the partial portion 12′ of FIG. 3d. In particular, the width P″ corresponds to the width P′ of the partial portion 12′ of FIG. 3d. In FIG. 4a, however, the position of the partial portion 12″ is changed compared to the position of the partial portion 12′ of FIG. 3d. Specifically, the partial portion 12″ in FIG. 4a is tilted and rotated, respectively, to the right compared to the partial portion 12′, such that the upper edge of the partial portion 12″, which runs in the width direction of the partial portion 12″, has an orientation angle β to the edge of the image sensor 11 which is larger than 90°. In other words, the longitudinal axis of the partial portion 12″ does not extend in an up and down direction as in FIG. 3d, but extends in FIG. 4a at an angle to the up and down direction, and the boundaries of the partial portion 12″ are not parallel to those of the likewise rectangular image sensor 11.
(54) FIG. 4b shows the monitor 30 of the view system according to the invention which substantially corresponds to the monitor of FIGS. 3c and 3e and reproduces the partial portion 12″. As shown in FIG. 4b, the objects 50 and 70 which are located in the partial portion 12″ are represented in an even less distorted or entirely distortion-free manner, such that their longitudinal axes in an up and down direction in FIG. 4b run substantially parallel to the longitudinal edge of the monitor 30. Thus, the widths of the objects 50 and 70 run on the monitor 30 of FIG. 4b in a left and right direction, such that the arrangement angle β′ is approximately perpendicular to the longitudinal edge of the monitor 30. The object 50 has a width L″ which extends in a left and right direction on the monitor 30.
(55) By the additional rotation of the partial portion 12″ compared to the position of the partial portion 12′, the deformation and distortion, respectively, of the objects 50 and 70 may be further reduced or even entirely eliminated. By the distortion-free representation of the objects 50 and 70 on the monitor 30, the driver may well recognize the size of the objects 50 and 70 as well as their position and orientation in the vehicle environment. Thus, the driver may assess still more reliably whether or not the truck collides with the object 50 and/or the object 70 during the planned driving maneuver.
(56) In the monitor depictions in FIGS. 3c, 3e and 4b, the ratios of the width dimensions L, L′ and L″ of the objects 40 and 50 substantially correspond to the ratios of the width dimensions of the objects 40 and 50 in reality.
(57) FIGS. 5a to 5h show a compensation of the distortion behavior of the camera 10 in two spatial directions. A first spatial direction corresponds to a horizontal spatial direction (see the plan view of the capture direction of the camera 10A in FIG. 2a) and a second spatial direction corresponds to a vertical spatial direction which is substantially perpendicular to the first spatial direction (see the side view of the capture direction of the camera 10B in FIG. 2b).
(58) The driving situation shown in FIG. 5a substantially corresponds to the driving situation shown in FIG. 3a and, thus, corresponds to a plan view of a camera 10 which is attached to a vehicle (not shown) (corresponding to the view in FIG. 2a). In contrast to the driving situation shown in FIG. 3a, only the objects 40 and 50 are located in the angle of view γ1 of the camera 10. The object 40 lies next to the center of distortion whereas the object 50 is located at a distance therefrom.
(59) FIG. 5b also shows the driving situation of FIG. 5a, however, from another perspective. Specifically, FIG. 5b shows a side view of the camera 10 and, thus, a side view of the objects 40 and 50 (corresponding to the view in FIG. 2b, object 40 not shown). In this respect, the object 40 is located in a depiction angle α2 and the object 50 is located in a depiction angle α2′. The depiction angles α2 and α2′ are smaller than the angle of view γ2 (not shown) and lie within the angle of view γ2.
(60) FIG. 5c shows an image sensor 11 similar to the image sensor of FIG. 3b. On the image sensor 11, the partial portion 12 is defined. The image data of the partial portion 12 correspond to the image data of the depiction angles α1 and α2, each of which comprises the object 40. The partial portion 12 has a width P1 in a left and right direction in FIG. 5c, which arises from the image data of the depiction angle α1 in a first spatial direction, and a length P2 in an up and down direction in FIG. 5c, which arises from the image data of the depiction angle α2 in the second spatial direction. The partial portion 12 may again be considered as the reference partial portion 12 with a substantially distortion-free depiction.
(61) FIG. 5d shows a monitor which has substantially the same construction as the monitor of FIG. 3c and whose monitor surface substantially corresponds to the partial portion 12. The object 40 shown on the monitor 30 is represented in a substantially undistorted manner, because it is located next to the center of distortion of the camera 10, and has a width L1 and a length L2 whose ratio corresponds approximately to the actual ratio of the width and the height of the object 40 in the vehicle environment.
(62) FIG. 5e shows again the image sensor 11 of FIG. 5c. On the image sensor 11, a partial portion 12′ is defined. The image data of the partial portion 12′ correspond to the image data of the depiction angles α1′ and α2′, each of which comprises the object 50. The partial portion 12′ has a width P1′ in a left and right direction in FIG. 5e and a length P2′ in an up and down direction in FIG. 5e, each of which arises from the image data of the two depiction angles α1′ and α2′. Both the width P1′ and the length P2′ of the partial portion 12′ are changed compared to the width P1 and the length P2 of the partial portion 12 of FIG. 5c, because the partial portion 12′ is taken at another position on the image sensor 11 than the partial portion 12. By changing the width and the length, respectively, of the partial portion 12 dependent on its position on the image sensor 11, i.e., by an adaptation of the geometry of the partial portion 12 in two spatial directions, the distortion behavior of the camera 10 may be compensated better than if a compensation merely occurs in one spatial direction.
(63) FIG. 5f shows a monitor 30 which has substantially the same construction as the monitor of FIG. 3c and whose monitor surface substantially corresponds to the partial portion 12′. The object 50 shown on the monitor 30 is represented in a substantially undistorted manner, although it is arranged distally from the center of distortion of the camera 10, and has a width L1′ and a length L2′ whose ratio approximately corresponds to the actual ratio of the object 50 in the vehicle environment. The approximately distortion-free depiction of the object 50 may be realized by adapting the geometry of the partial portion 12′ compared to the geometry of the partial portion 12. The extent and the degree of the adaptation depend on the position of the partial portion 12′ and, specifically, on the distance of the partial portion 12′ from the depiction of the center of distortion on the image sensor 11. By changing the geometry of the partial portion 12′ dependent on the taking position of the partial portion 12′ on the image sensor 11, the distortion behavior of the camera, which is presently pincushion-shaped, may be compensated or at least reduced. Even if not shown, a position change of the partial portion 12′ relative to the position of the partial portion 12, similar to the modification of the first embodiment in FIGS. 4a and 4b, e.g., a rotation, is also conceivable.
(64) The compensation of the distortion is shown in FIGS. 5g and 5h. In FIGS. 5g and 5h, the compensation curves K1 and K2, respectively, of the camera 10 are shown in a diagram. As for the compensation curve K of FIG. 3f, in the diagrams of FIGS. 5g and 5h an angle extension in degrees [°] is indicated on the abscissa and a length extension in pixels is indicated on the ordinate. The compensation curves K1 and K2 are curves which run through the point of origin (which represents the center of distortion) and each of which extends non-linearly upwards to the right in FIGS. 5g and 5h, wherein their slope increases strongly with increasing distance from the point of origin, because the camera 10 has a pincushion distortion. In FIGS. 5g and 5h, the compensation curves K1 and K2 correspond to the distortion curves of the optical element in the respective spatial direction. The compensation curve K1 shown in FIG. 5g corresponds to the distortion behavior of the camera 10 in the first, horizontal spatial direction, whereas the compensation curve K2 shown in FIG. 5h corresponds to the distortion behavior of the camera 10 in the second, vertical spatial direction.
(65) The compensation curves K1, K2 are used for determining the widths P1, P1′ and the lengths P2, P2′ of the partial portions 12, 12′ on the image sensor 11 dependent on a change in the position of the depiction angles α1, α1′, α2, α2′ of the camera 10 corresponding to the partial portion 12, 12′, such that both partial portions 12, 12′ may be shown undistorted or approximately undistorted on the monitor 30. In other words, the image processing unit 20A, 20B may determine, by means of the respective compensation curve K1, K2, which width P1, P1′ and which length P2, P2′ the partial portion 12, 12′ taken from the image sensor 11 has dependent on the displacement of the corresponding depiction angles α1, α1′ and α2, α2′, respectively, and, thus, dependent on the position of the respective partial portion 12, 12′ on the image sensor 11.
(66) In the diagrams of FIGS. 5g and 5h, the angles α1, α1′ and α2, α2′ of FIGS. 5a and 5b as well as the width dimensions P1, P1′ and the length dimensions P2, P2′ of the partial portions 12, 12′ of FIGS. 5c and 5e are indicated. As shown in FIG. 5g, the width P1, P1′ of the partial portion 12, 12′ increases in the first spatial direction with increasing angle extension, whereas the angles α1, α1′ stay equal. As shown in FIG. 5h, the length P2, P2′ of the partial portion 12, 12′ likewise increases in the second spatial direction with increasing angle extension, whereas the angles α2, α2′ stay equal. As already explained with reference to FIG. 3f, it is, however, also conceivable that the angles α1, α1′ and/or α2, α2′ also change slightly depending on the position of the respective partial portion 12, 12′, whereby the respective width P1, P1′ and/or length P2, P2′ of the respective partial portion 12, 12′ also changes, although not to the same extent as when the angles α1, α1′ and/or α2, α2′ are not subjected to a change.
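For the second embodiment, the same determination can be sketched in two spatial directions (again only an assumed Python example; the steepening tangent mapping merely stands in for a pincushion-type curve): the width is read from K1 and the length from K2, and for a steepening curve both dimensions grow with increasing angle extension.

    import math

    def steepening_curve(angle_deg, f_px=800.0):
        # Example of a pincushion-type curve: the slope increases with the angle.
        return f_px * math.tan(math.radians(angle_deg))

    def partial_portion_geometry(k1, k2, offset1_deg, alpha1_deg, offset2_deg, alpha2_deg):
        # Width from the horizontal curve K1, length from the vertical curve K2.
        width = k1(offset1_deg + alpha1_deg) - k1(offset1_deg)
        length = k2(offset2_deg + alpha2_deg) - k2(offset2_deg)
        return width, length

    P1, P2 = partial_portion_geometry(steepening_curve, steepening_curve,
                                      0.0, 15.0, 0.0, 10.0)      # reference portion 12
    P1p, P2p = partial_portion_geometry(steepening_curve, steepening_curve,
                                        25.0, 15.0, 20.0, 10.0)  # displaced portion 12'
    assert P1p > P1 and P2p > P2  # both dimensions increase with angle extension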
(67) As shown in FIGS. 5d and 5f, the object 40 is represented on the monitor 30 in a non-distorted manner and the object 50 is represented on the monitor 30 in a slightly distorted manner. By the hardly or only slightly distorted representation of the objects 40 and 50 on the monitor 30, the driver may well recognize the size of the objects 40 and 50 as well as their position and orientation in the vehicle environment. Thus, the driver may reliably assess whether or not the truck collides with the object 40 and/or the object 50 during the planned driving maneuver. In this respect, a compensation of the distortion behavior of the camera 10 by the image processing unit 20A, 20B in two spatial directions yields a less distorted representation of objects in comparison to a compensation in merely one spatial direction. Compensations in more than two spatial directions are also conceivable, in order to allow an even less distorted depiction of objects.
(68) In FIG. 5i, an image sensor 11 is shown on which a partial portion 12 and a partial portion 12″ are defined. The partial portion 12″ corresponds to a partial portion which is rotated with respect to the partial portion 12′ of FIG. 5e. That is, the partial portion 12″ corresponds to a partial portion which is already geometrically adapted to the taking position and, in addition, is rotated with respect to the vertical edge of the image sensor. As already described with respect to FIGS. 4a and 4b, a distortion behavior of an optical element may be compensated even more reliably by adapting the geometry of a partial portion taken from the image sensor and by rotating the adapted geometry.
(69) As shown in FIG. 5i, the grid cylinder (see FIG. 2c) captured by the camera 10 is depicted on the sensor surface of the image sensor 11. Due to the distortion behavior of the optics and the fundamental rules of perspective, the vertical lines VL of FIG. 2c do not run parallel to the vertical edge of the image sensor 11 in FIG. 5i, but are depicted as bent (virtual) lines LV1, LV2, LV3, etc. (generally referred to as LV). Also, the horizontal lines HL of FIG. 2c do not run parallel to the horizontal edge of the image sensor in FIG. 5i, but are depicted as bent (virtual) lines LH1, LH2, etc. (generally referred to as LH). The bending of the lines increases from the center of distortion towards the edge of the image sensor. The partial portion 12 lies in a portion of the image sensor in which the edges of the partial portion 12 are nearly parallel to the vertical and horizontal lines of the depicted grid pattern. Thus, the partial portion 12 is depicted almost distortion-free on the sensor. The partial portion 12″ lies in a portion of the image sensor in which the vertical and horizontal lines of the depicted grid pattern are strongly bent.
(70) If the geometry of the partial portion 12 were taken, without adaptation of the geometry, from a portion of the image sensor whose vertical lines LV1, LV2, LV3 and horizontal lines LH1, LH2 are strongly bent, the image data of the partial portion 12 would be reproduced on the monitor in a strongly distorted manner. To compensate the distortion, the geometry of the partial portion 12 (with center M) may, as shown in FIGS. 5a to 5h, be adapted to the taking position on the image sensor, if it is taken at a position on the image sensor where the vertical lines LV1, LV2, LV3 and the horizontal lines LH1, LH2 are strongly bent. The distortion may be compensated even better if the geometry is, in addition, adapted to the progression and the bend, respectively, of the horizontal lines LH1, LH2 and the vertical lines LV1, LV2, LV3, i.e., if the geometry is rotated with respect to the vertical orientation on the image sensor such that the longitudinal axis and the lateral axis of the geometry, which run through the center M″ of the geometry, lie tangentially to a respective bent horizontal line LH and vertical line LV. The rotation angle by which the partial portion 12″ is rotated with respect to the vertical orientation of the sensor edge is referred to as β.
(71) The determination of the rotation angle β occurs by means of a vector field. In this respect, for each point of the sensor surface (e.g., each pixel), an associated angle is determined and stored in a database. That is, for each sensor pixel, based on the depiction function of the optical element with its distortion curve and on the inclination angle of the camera (see FIG. 2c), a coordinate of the cylindrical grid pattern which is depicted on the image sensor 11 is associated. In this respect, it is not required that each pixel has a direct association. The values between the stored points may, for example, be determined by interpolation. The determination of the angle may occur empirically or by means of calculation. It is sufficient to determine the position of a partial portion by means of the position of one of its points on the sensor. The point may, for example, be a center M″ of the partial portion 12″ which lies in the center of the partial portion 12″. Dependent on the position of the center M″ of the partial portion 12″ on the image sensor, the control unit 20A, 20B reads out a rotation angle from the database and rotates the partial portion 12′ by the read-out angle.
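A minimal sketch, assuming a stored grid of rotation angles with bilinear interpolation between the stored points, of the vector field lookup described above (the grid spacing, the sensor size and the zero-valued dummy angles are assumptions; in practice the values would be determined empirically or by calculation from the depiction function and stored in the processing unit):

    import numpy as np

    GRID_STEP = 64  # one stored angle per 64 x 64 pixel block of the sensor surface
    # angle_field[row, col] holds the rotation angle (in degrees) for the sensor
    # point (col * GRID_STEP, row * GRID_STEP); dummy zeros for a 1920 x 1080 sensor.
    angle_field = np.zeros((1080 // GRID_STEP + 1, 1920 // GRID_STEP + 1))

    def rotation_angle(x_px, y_px):
        # Bilinear interpolation between the four surrounding stored grid points,
        # e.g., for the center M'' of the partial portion 12''.
        gx, gy = x_px / GRID_STEP, y_px / GRID_STEP
        x0, y0 = int(gx), int(gy)
        x1 = min(x0 + 1, angle_field.shape[1] - 1)
        y1 = min(y0 + 1, angle_field.shape[0] - 1)
        tx, ty = gx - x0, gy - y0
        top = (1 - tx) * angle_field[y0, x0] + tx * angle_field[y0, x1]
        bottom = (1 - tx) * angle_field[y1, x0] + tx * angle_field[y1, x1]
        return (1 - ty) * top + ty * bottom

    beta = rotation_angle(1500.0, 700.0)  # angle by which the partial portion is rotated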
(72) For ease of comprehension, in FIG. 5j a projection of the grid cylinder of FIG. 2c related to the situation of FIG. 5i is shown. The points Se1, Se2, Se3 and Se4 mark corner points of a projected intersection of the field of view of the camera and correspond to the corner points Se1, Se2, Se3 and Se4 of the image sensor. As can be taken from FIG. 5j, the partial portion 12 has a height H and a width W, while the partial portion 12″ has a height H″ and a width W″ which are approximately equal (H ≈ H″, W ≈ W″).
(73) The transmission of the rotated partial portion 12″ from the image sensor 11 to the monitor 30 may occur by means of a transformation matrix based on the changed geometry and the rotation. In this respect, the points Se1, Se2, Se3 and Se4 correspond to the points ME1 to ME4 on the monitor. The partial portion 12″ is thus depicted on the monitor 30 in its entirety. As shown in FIG. 5k, the objects 40 and 60 are depicted on the monitor 30 almost distortion-free due to the changed geometry and the rotation of the partial portion 12″. The object 40, for example, has a width L″ on the monitor 30 which corresponds, in its proportion, to the width L of the object 40 in the vehicle environment.
(74) FIGS. 6a to 6c show a further approach as to how the geometry of the partial portion 12, 12′, 12″ is adapted dependent on the position on the image sensor 11 such that a distortion behavior of an optical element may be compensated as far as possible. As shown in FIG. 6a, three partial portions 12, 12′ and 12″ are defined on the image sensor. All three partial portions 12, 12′ and 12″ are subjected to a matrix transformation, i.e., a perspective transformation, dependent on their respective position on the image sensor 11.
(75) In contrast to the method of FIGS. 5i to 5k, which forms the basis of FIGS. 6a to 6c, the geometry of the partial portions 12, 12′ and 12″ at certain positions is, however, not determined via the rotation angle; instead, characteristic points of the partial portions 12, 12′ and 12″, such as the corner points at which the side edges of each of the partial portions 12, 12′, 12″ intersect each other, are used for the determination of the geometry. With respect to FIGS. 6a and 6c, the points E1 to E4 are used for the partial portion 12, the points E1′ to E4′ are used for the partial portion 12′ and the points E1″ to E4″ are used for the partial portion 12″, respectively, as input data for a matrix transformation, while the corner points ME1 to ME4 of the monitor form the target data for the matrix transformation.
(76) FIG. 6b shows again a projection of the grid cylinder of FIG. 2c. The points Se1, Se2, Se3 and Se4 again mark corner points of a projected intersection of the field of view of the camera and correspond to the corner points Se1, Se2, Se3 and Se4 of the image sensor. As can be taken from FIG. 6b, the partial portion 12 has a height H and a width W, while the partial portion 12′ has a height H′ and a width W′, and the partial portion 12″ has a height H″ and a width W″, which are each approximately equal (H ≈ H′ ≈ H″, W ≈ W′ ≈ W″).
(77) The transmission of the partial portions 12, 12′, 12″ from the image sensor 11 to the monitor 30 occurs by means of a transformation matrix. The corner points ME1 to ME4 of the monitor, in this respect, form the target data/target points for the partial portion 12′, which correspond to its corner points E1′ to E4′. Thereby, as shown in FIG. 6c, the object 40, for example, has a width L′ on the monitor 30 which corresponds, in its proportion, to the width L of the object 40 in the vehicle environment. In general, the shape of the partial portions 12, 12′, 12″ on the image sensor 11 and, correspondingly, on the monitor 30 is dependent on the distortion behavior of the optical element.
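One possible, non-prescribed implementation of this matrix transformation, sketched in Python with OpenCV and assumed example corner coordinates: the four characteristic corner points of a partial portion on the image sensor are used as input data and the monitor corner points ME1 to ME4 as target data of a perspective transformation.

    import numpy as np
    import cv2

    def depict_partial_portion(sensor_image, corner_points, monitor_size=(400, 600)):
        # corner_points: E1..E4 in sensor pixels, ordered top-left, top-right,
        # bottom-right, bottom-left; the target points are the monitor corners ME1..ME4.
        w, h = monitor_size
        src = np.float32(corner_points)
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        matrix = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(sensor_image, matrix, (w, h))

    sensor_image = np.zeros((1080, 1920, 3), dtype=np.uint8)   # dummy sensor frame
    E = [[1500, 200], [1750, 260], [1700, 900], [1450, 840]]   # assumed corner points
    monitor_image = depict_partial_portion(sensor_image, E)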
(78) A distortion behavior of an optical element may thus be compensated more reliably by adapting the geometry and additionally rotating the geometry dependent on its position on the image sensor than by simply adapting the geometry. The invention should be understood such that, for compensation of a distortion behavior of an optical element, an exclusive rotation of the geometry may also occur.
(79) Generally, the representation of the environment around a vehicle, i.e., the representation of the image data from the depiction angles and, thus, of the partial portions, may occur temporally successively on a single monitor with only a single monitor portion, or simultaneously on a single monitor with separated monitor portions. The separated monitor portions may be arranged adjacent to each other in a left and right direction of the monitor or on top of each other and may include further fields of view of a main mirror and/or a wide angle mirror of a commercial vehicle, such as defined in ECE R 46, e.g., the field of view of a main mirror in an upper monitor portion and the field of view of a wide angle mirror in a lower monitor portion. It is also conceivable that all fields of view are represented on a monitor with a single monitor portion. This requires spatially adjacent or at least closely located partial portions and a continuous or section-wise compensation of the distortion behavior of the optical element over the image sensor surface in which the partial portions are located, such that a continuous complete image of the vehicle environment (without interruption) is generated, which allows the driver a quick assessment of the vehicle environment. Further, the partial portions may be represented at different times, e.g., depending on the driving situation or inputs of the driver. Finally, a representation of different partial portions on a plurality of monitors is also conceivable.
(80) It is explicitly stated that all features disclosed in the description and/or the claims are intended to be disclosed separately and independently from each other for the purpose of original disclosure as well as for the purpose of restricting the claimed invention independent of the composition of the features in the embodiments and/or the claims. It is explicitly stated that all value ranges or indications of groups of entities disclose every possible intermediate value or intermediate entity for the purpose of original disclosure as well as for the purpose of restricting the claimed invention, in particular as limits of value ranges.
(81) Thus, while there have been shown and described and pointed out fundamental novel features of the invention as applied to a preferred embodiment thereof, it will be understood that various omissions and substitutions and changes in the form and details of the devices illustrated, and in their operation, may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or embodiment of the invention may be incorporated in any other disclosed or described or suggested form or embodiment as a general matter of design choice. It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto.