Viewing system and method for displaying an environment of a vehicle
09865169 · 2018-01-09
Inventors
- Bernd Gassmann (Straubenhardt-Langenalb, DE)
- Kay-Ulrich Scholl (Karlsbad, DE)
- Johannes Quast (Karlsruhe, DE)
Abstract
Displaying an environment of a vehicle includes recording an image of an adjacent exterior environment of the vehicle in the form of image data with the aid of a plurality of image capture units and displaying an output image with the aid of a display device. A virtual, three-dimensional space and a surface are determined with the aid of an arithmetic unit, the virtual, three-dimensional space being at least partially delimited by the surface. A projection of the image data onto the surface is calculated with the aid of the arithmetic unit. A virtual vehicle object is calculated from predetermined data as a computer-generated graphic in the virtual, three-dimensional space with the aid of the arithmetic unit. The output image is generated with the aid of the arithmetic unit by rendering a viewing volume which includes the virtual vehicle object, the viewing volume being delimited by the surface and being based on a viewing position, a viewing angle and a zoom factor in the virtual, three-dimensional space.
Claims
1. A viewing system of a vehicle, comprising: a plurality of cameras that each records an image of an exterior environment of the vehicle and provides captured image data indicative thereof; a display that displays an output image; and a processor that receives the captured image data, wherein the processor is configured to: determine a virtual, three-dimensional space and a surface that at least partially delimits the virtual, three-dimensional space; calculate a single projection of the captured image data directly onto the surface; calculate a virtual vehicle object as a computer-generated graphic from predetermined data in the virtual, three-dimensional space; and generate the output image by rendering a first viewing volume which includes the virtual vehicle object, the first viewing volume being delimited by the surface, the first viewing volume being based on a first viewing position and a first viewing angle and a first zoom factor in the virtual, three-dimensional space and by rendering a second viewing volume different from the first viewing volume which includes a frontmost part of the virtual vehicle object and a horizontal region of the surface in front of the virtual vehicle object, the second viewing volume being based on a second viewing position different from the first viewing position and a second viewing angle different from the first viewing angle and a second zoom factor different from the first zoom factor in the virtual, three-dimensional space, wherein the processor is configured to adapt a shape of the surface as a function of a measured variable.
2. The viewing system of claim 1, wherein, to transition between the first viewing volume and the second viewing volume, the processor is configured to at least one of: change the first viewing position from a first coordinate in the virtual, three-dimensional space to a second coordinate in the virtual, three-dimensional space; change the first viewing angle from a first direction in the virtual, three-dimensional space to a second direction in the virtual, three-dimensional space; and change the first zoom factor from a first zoom value to a second zoom value.
3. The viewing system of claim 2, wherein at least one predetermined viewing volume is associated with each region of the vehicle, the predetermined viewing volume having a part of the vehicle object associated with the region of the vehicle and a surface region of the surface adjacent to the associated part, wherein the processor is configured to control the predetermined viewing volume associated with the region based on an exceeding of a threshold by a collision probability (CP) in order to render the part of the vehicle object associated with the region and the surface region of the surface adjacent to the associated part.
4. The viewing system of claim 1, wherein the processor is configured to determine a collision probability (CP) of a collision between a region of the vehicle and an object in the environment based on at least one measured distance to the object in the environment.
5. The viewing system of claim 4, wherein the processor is configured to determine the collision probability (CP) of the collision between the region of the vehicle and the object in the environment based on at least one signal which is associated with a movement of the vehicle in the environment.
6. The viewing system of claim 5, wherein the at least one signal is associated with at least one of: a gear selection, a speed of the vehicle, an acceleration of the vehicle, a change in position, a sensed brake pedal position, a sensed accelerator pedal position, and a set turn indicator.
7. The viewing system of claim 5, wherein the at least one signal is associated with at least two of: a gear selection, a speed of the vehicle, an acceleration of the vehicle, a change in position, a sensed brake pedal position, a sensed accelerator pedal position, and a set turn indicator.
8. The viewing system of claim 5, wherein the at least one signal is associated with a gear selection, a speed of the vehicle, an acceleration of the vehicle, a change in position, a sensed brake pedal position, a sensed accelerator pedal position, and a set turn indicator.
9. The viewing system of claim 1, wherein the measured variable is a speed of the vehicle.
10. A method for displaying an environment of a vehicle, comprising: recording an image of an environment adjacent to the vehicle in the form of captured image data with the aid of a plurality of cameras; displaying an output image on a display device; determining a virtual, three-dimensional space and a surface with the aid of a processor, the virtual, three-dimensional space being at least partially delimited by the surface; calculating a single projection of the captured image data directly onto the surface with the aid of the processor; calculating a virtual vehicle object from predetermined data as a computer-generated graphic in the virtual, three-dimensional space with the aid of the processor; generating the output image with the aid of the processor by rendering a first viewing volume which includes the virtual vehicle object, the first viewing volume being delimited by the surface, and the first viewing volume being based on a first viewing position and a first viewing angle and a first zoom factor in the virtual, three-dimensional space and by rendering a second viewing volume which includes a frontmost part of the virtual vehicle object and a horizontal region of the surface in front of the virtual vehicle object, the second viewing volume being different from the first viewing volume and based on a second viewing position, different from the first viewing position, and a second viewing angle, different from the first viewing angle, and a second zoom factor, different from the first zoom factor, in the virtual, three-dimensional space; and adapting a shape of the surface as a function of a measured variable.
11. The method of claim 10, wherein the measured variable is a speed of the vehicle.
12. A viewing system of a vehicle, comprising: a plurality of cameras that each records an image of an exterior environment of the vehicle and provides captured image data indicative thereof; a display that displays an output image; and a processor that receives the captured image data, wherein the processor is configured to: determine a virtual, three-dimensional space and a surface that at least partially delimits the virtual, three-dimensional space; calculate a single projection of the captured image data directly onto the surface; calculate a virtual vehicle object as a computer-generated graphic from predetermined data in the virtual, three-dimensional space; and generate the output image by rendering a first viewing volume which includes the virtual vehicle object, the first viewing volume being delimited by the surface, the first viewing volume being based on a first viewing position and a first viewing angle and a first zoom factor in the virtual, three-dimensional space and by rendering a second viewing volume which includes a frontmost part of the virtual vehicle object and a horizontal region of the surface in front of the virtual vehicle object, the second viewing volume being based on a second viewing position and a second viewing angle and a second zoom factor in the virtual, three-dimensional space, wherein the processor is further configured to adapt a shape of the surface as a function of a user input.
13. The viewing system of claim 12, wherein the surface is in the shape of a bowl that has a base and a wall, the base of the bowl shape being designed to be planar, the wall of the bowl shape being designed to be curved.
14. A method for displaying an environment of a vehicle, comprising: recording an image of an environment adjacent to the vehicle in the form of captured image data with the aid of a plurality of cameras; displaying an output image on a display device; determining a virtual, three-dimensional space and a surface with the aid of a processor, the virtual, three-dimensional space being at least partially delimited by the surface; calculating a single projection of the captured image data directly onto the surface with the aid of the processor; calculating a virtual vehicle object from predetermined data as a computer-generated graphic in the virtual, three-dimensional space with the aid of the processor; generating the output image with the aid of the processor by rendering a first viewing volume which includes the virtual vehicle object, the first viewing volume being delimited by the surface, and the first viewing volume being based on a first viewing position and a first viewing angle and a first zoom factor in the virtual, three-dimensional space and by rendering a second viewing volume which includes a frontmost part of the virtual vehicle object and a horizontal region of the surface in front of the virtual vehicle object, the second viewing volume being based on a second viewing position and a second viewing angle and a second zoom factor in the virtual, three-dimensional space; and adapting a shape of the surface as a function of a user input.
15. The method of claim 14, wherein the surface is in the shape of a bowl that has a base and a wall, the base of the bowl shape being designed to be planar, the wall of the bowl shape being designed to be curved.
Description
DESCRIPTION OF THE DRAWINGS
(1) The invention is explained in greater detail below on the basis of exemplary embodiments illustrated in the drawings.
DETAILED DESCRIPTION OF THE INVENTION
(10) The cameras 110, 120, 140, 160 are directed toward the outside of the vehicle 100, so that the cameras 110, 120, 140, 160 record an image of the environment 900.
(11) Each of the distance sensors 410, 420, 430, 440, 450, 460 is designed to measure a distance. The distance sensors 410, 420, 430, 450, 460 measure distances d₁, d₂, d₃, d₅, d₆, respectively, between the vehicle 100 and one of the objects 910, 920, 930, 956 in a contactless manner, for example capacitively or using ultrasound. The distance sensor 440 also measures a distance d₄, which is not shown in the illustrated embodiment.
(12) The cameras 110, 120, 140, 160 and the distance sensors 410, 420, 430, 440, 450, 460 are connected to a signal and/or data processor 200, which is configured to evaluate the signals of the cameras 110, 120, 140, 160 and the distance sensors 410, 420, 430, 440, 450, 460.
(14) The generation of the output image 330 is illustrated schematically in the drawings.
(15) The vehicle object 500 is positioned in a virtual, three-dimensional space 600. The virtual, three-dimensional space 600 is delimited by a surface 690. In the exemplary embodiment, the surface 690 has the shape of a bowl with a planar base and a curved wall.
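The bowl shape can be illustrated with a short sketch. The following Python snippet is a minimal model, assuming a rotationally symmetric surface; the names bowl_height, BASE_RADIUS and WALL_CURVATURE as well as the quadratic wall profile are illustrative assumptions and are not taken from the patent.

```python
import math

# Illustrative parameters (not from the patent): radius of the planar
# base and a curvature coefficient for the rising wall.
BASE_RADIUS = 5.0      # metres of flat ground around the vehicle
WALL_CURVATURE = 0.15

def bowl_height(x: float, y: float) -> float:
    """Height z of the bowl-shaped surface 690 above ground point (x, y).

    Inside BASE_RADIUS the base is planar (z = 0); beyond it the wall
    rises with a quadratic curve, approximating the curved wall.
    """
    r = math.hypot(x, y)
    if r <= BASE_RADIUS:
        return 0.0                                   # planar base
    return WALL_CURVATURE * (r - BASE_RADIUS) ** 2   # curved wall

# Sample the surface along one radial direction.
for r in (0.0, 5.0, 8.0, 12.0):
    print(f"r = {r:5.1f} m -> z = {bowl_height(r, 0.0):5.2f} m")
```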
(16) A projection of image data onto the surface 690 is calculated, the image data being recorded with the aid of the cameras 110, 120, 140, 160. The recorded image data is projected onto the inside of the bowl shape.
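How image data can be mapped onto such a surface may be sketched as follows, under simplifying assumptions: a pinhole camera looking along one vehicle axis, translation-only extrinsics and no lens distortion. The function project_to_camera and all of its parameters are hypothetical; in this picture, each surface vertex would receive the color of the pixel it projects to.

```python
def project_to_camera(point, cam_pos, focal_px, img_w, img_h):
    """Map a 3D surface point to pixel coordinates of one camera.

    Minimal pinhole model: the camera is assumed to look along the
    +x axis of the vehicle frame, with translation-only extrinsics
    and no lens distortion. Returns None if the point lies behind
    the camera or outside the image.
    """
    x = point[0] - cam_pos[0]   # depth along the optical axis
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if x <= 0.0:
        return None             # behind the camera
    u = img_w / 2 + focal_px * (-y) / x
    v = img_h / 2 + focal_px * (-z) / x
    if 0 <= u < img_w and 0 <= v < img_h:
        return (u, v)           # texture coordinate for this vertex
    return None

# A ground point 4 m ahead of a front camera mounted 0.8 m high.
print(project_to_camera((4.0, 0.5, 0.0), (0.0, 0.0, 0.8), 500.0, 1280, 800))
```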
(17) The output image 330 is generated based on a viewing volume 711, 721. The viewing volume 711, 721 is delimited in the three-dimensional space 600 by the surface 690. The viewing volume 711, 721 is based on a viewing position 710, 720 and a viewing angle 712, 722 and a zoom factor 714, 724 in the virtual, three-dimensional space 600. The viewing position 710, 720 must lie within the region enclosed by the surface 690. The zoom factor 714, 724 may be fixed or adjustable. In the illustrated embodiment, a first viewing volume 721 and a second viewing volume 711 are provided.
(18) The first viewing volume 721 is defined by a so-called first clipping plane 723, by the first zoom factor 724, by the first viewing angle 722 and by the delimitation by the surface 690. Because of the curvature of the surface 690, the first viewing volume 721 deviates from a truncated pyramid shape. The first zoom factor 724 is represented in simplified form by an opening angle. The vehicle object 500 is also included in the first viewing volume 721, so that, seen from the first viewing position 720, a region of the surface 690 behind the vehicle object 500 is not visible in the output image 330. A hidden surface determination is therefore carried out. Only parts of the vehicle object 500 are visible, parts 503 and 502 in the case of the first viewing position 720. The output image 330 is generated by rendering the first viewing volume 721.
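How viewing position, viewing angle and zoom factor span such a volume can be pictured with the membership test below. For brevity the frustum is approximated by a cone whose opening angle stands in for the zoom factor, and the near side is cut off at the clipping plane; all names are illustrative, and the far side would be delimited by the surface 690 itself.

```python
import math

def in_viewing_volume(point, view_pos, view_dir, opening_deg, near):
    """Test whether a point lies in the viewing volume.

    view_pos    -> viewing position 710/720
    view_dir    -> viewing angle 712/722 as a unit vector
    opening_deg -> zoom factor 714/724 expressed as an opening angle
    near        -> distance of the clipping plane 723

    The far side is delimited by the surface 690 itself, so no far
    plane is tested here; the frustum is approximated by a cone.
    """
    dx = [p - v for p, v in zip(point, view_pos)]
    dist = math.sqrt(sum(c * c for c in dx))
    if dist == 0.0:
        return False
    along = sum(c * d for c, d in zip(dx, view_dir))
    if along < near:
        return False     # in front of the clipping plane
    angle = math.degrees(math.acos(max(-1.0, min(1.0, along / dist))))
    return angle <= opening_deg / 2.0

# A point at the origin, seen from a raised position looking down at it.
print(in_viewing_volume((0.0, 0.0, 0.0), (6.0, 0.0, 2.5),
                        (-0.923, 0.0, -0.385), 60.0, 0.5))   # True
```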
(20) Although it is possible in the simplest case to switch abruptly between the first viewing volume 721 and the second viewing volume 711, a smooth variation from the first viewing volume to the second viewing volume improves orientation. The viewing position is advantageously changed continuously from the first viewing position 720 to the second viewing position 710 along a trajectory 790. The viewing angle 722, 712 may also be adapted continuously. Exemplary embodiments of such changes are explained by way of example below.
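A continuous change of this kind may be sketched as a simple interpolation; the straight line below is only a stand-in for the trajectory 790, and the function names are illustrative.

```python
import math

def lerp(a, b, t):
    """Linear interpolation between two coordinate triples."""
    return [x + (y - x) * t for x, y in zip(a, b)]

def camera_at(t, pos_a, pos_b, dir_a, dir_b):
    """Viewing position and viewing angle at parameter t in [0, 1]
    of the transition from the first to the second viewing volume.

    A straight line stands in for the trajectory 790; the viewing
    direction is blended and renormalised so that the virtual
    camera turns continuously instead of jumping.
    """
    pos = lerp(pos_a, pos_b, t)
    d = lerp(dir_a, dir_b, t)
    n = math.sqrt(sum(c * c for c in d)) or 1.0
    return pos, [c / n for c in d]

# Halfway between a raised overview position and a low frontal position.
print(camera_at(0.5, [6.0, 0.0, 3.0], [2.0, 0.0, 0.5],
                [-1.0, 0.0, -0.4], [-1.0, 0.0, 0.0]))
```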
(21) An embodiment having trajectories 791, 792, 793, 797, 798 is illustrated in the drawings.
(22) The vehicle object 500 is positioned in the center of the trajectories.
(23) In addition, a viewing position 740 from a bird's eye perspective, which has an associated trajectory 793, and a viewing position 730 between the lowermost viewing position 720 and the viewing position 740 from a bird's eye perspective, are each shown at different heights. The change in height of the viewing position 720, 730, 740 is also illustrated by a trajectory, for example a circular trajectory 797, 798. If a danger of collision is ascertained with the aid of the sensors 410, 420, 430, 440, 450, 460, the viewing position is changed accordingly.
(25) An output image 330 is illustrated schematically in the drawings.
(26) The output image 330 for the second viewing volume 711 is likewise shown schematically.
(30) An environment sensor system is evaluated in a second method step 2, for example an environment sensor system having the distance sensors 410, 420, 430, 440, 450, 460 described above.
(31) In the third method step 3, data of the status sensor system is interpreted, and the direction (gear/steering wheel motion/turn signals) in which the vehicle 100 is expected to move, and at what speed (accelerator pedal/brake pressure), are ascertained. The environment sensor system, on the other hand, is interpreted in the fourth method step 4; for example, an approach toward or a moving away from an object 910, 920, 930, 956 and/or its direction of movement in the environment 900 may be determined.
(32) A predetermined viewing volume 721, 711, which has a viewing position 710, 720 and/or a viewing angle 712, 722, is ascertained in the fifth method step 5. The goal is to generate the output image 330 by rendering the predetermined viewing volume 711, 721. The predetermined viewing volume 711, 721 includes a virtual vehicle object 500. The predetermined viewing volume 711, 721 is also delimited by a surface 690 in the direction of the viewing angle 712, 722. A projection of image data RAW of an environment of the vehicle onto the surface 690 is calculated. The predetermined viewing volume 711, 721 is based on the viewing position 710, 720 and the viewing angle 712, 722 and on a fixed or variable zoom factor 714, 724 in the virtual, three-dimensional space 600, as described above.
(33) A decision as to which viewing position 710, 720 is to be approached is made in the fifth method step 5. To make the decision, for example, a threshold comparison is carried out in the exemplary embodiment. A collision probability
CP = f(d₁, …, d₆, S₁, …, S₅)  (1)
is determined, for example, as a vector. For example, each element of the vector is associated with one probability of a collision of a specific vehicle region 101, 102, 103, 104, 105, 106.
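One possible reading of formula (1) is sketched below, assuming that the five status signals S₁ … S₅ have already been reduced to one approach factor per vehicle region; the inverse-distance weighting is an illustrative assumption, not the patent's definition of f.

```python
def collision_probability(distances, approach):
    """Sketch of formula (1): CP = f(d1..d6, S1..S5) as a vector.

    distances -- measured distances d1..d6, one per vehicle region
                 101..106 (None if no object is detected)
    approach  -- one approach factor per region, assumed to have been
                 derived beforehand from the status signals S1..S5
                 (gear selection, speed, pedals, turn indicator)

    The inverse-distance weighting is illustrative only.
    """
    cp = []
    for d, s in zip(distances, approach):
        if d is None or d <= 0.0:
            cp.append(0.0)
            continue
        cp.append(min(1.0, (1.0 / d) * max(0.0, s)))  # near + fast -> high
    return cp

# Object close to the rear region 104 while the vehicle reverses.
print(collision_probability([8.0, 9.0, 7.5, 0.6, 9.0, 8.5],
                            [0.0, 0.0, 0.0, 1.2, 0.0, 0.0]))
```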
(37) For example, if it is determined in the fifth step 5 that the danger and thus the probability of a collision is high in both the rear region (corresponding to 104) and in a further region of the vehicle, a viewing position from a bird's eye perspective may be controlled.
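The threshold comparison and the resulting choice of viewing position may be sketched as follows; the threshold value and the string labels are illustrative assumptions.

```python
THRESHOLD = 0.5   # illustrative threshold for the comparison

def select_view(cp):
    """Threshold comparison of the fifth method step 5 (sketch).

    One region above the threshold  -> detail view of that region;
    several regions above threshold -> bird's eye perspective;
    none                            -> default view of the vehicle.
    """
    hot = [i for i, p in enumerate(cp) if p > THRESHOLD]
    if not hot:
        return "default"
    if len(hot) == 1:
        return f"detail:region_{101 + hot[0]}"
    return "birds_eye"

print(select_view([0.1, 0.0, 0.0, 0.9, 0.0, 0.0]))   # detail:region_104
print(select_view([0.7, 0.0, 0.0, 0.9, 0.0, 0.0]))   # birds_eye
```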
(38) A change in the viewing volume is controlled in the sixth step 6. The viewing position is changed between a first coordinate in the virtual, three-dimensional space and a second coordinate in the virtual, three-dimensional space. The viewing position is changed in a movement along a trajectory between the first coordinate and the second coordinate. For example, the trajectory has an elliptical shape, the vehicle object being positioned in the center of the ellipse. The viewing angle may also be changed between a first direction in the virtual, three-dimensional space and a second direction in the virtual, three-dimensional space. For example, a zoom factor may be permanently set.
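An elliptical trajectory with the vehicle object in its center can be parameterized as follows; the semi-axes and the height are illustrative values.

```python
import math

def elliptical_position(angle_rad, a=6.0, b=4.0, height=2.5):
    """Viewing position on an elliptical trajectory in the virtual,
    three-dimensional space; semi-axes a, b and the height are
    illustrative. The vehicle object sits at the origin, i.e. in
    the center of the ellipse, and so stays in view from every
    position on the trajectory.
    """
    return (a * math.cos(angle_rad), b * math.sin(angle_rad), height)

# Four viewing positions a quarter turn apart.
for k in range(4):
    x, y, z = elliptical_position(k * math.pi / 2)
    print(f"({x:5.2f}, {y:5.2f}, {z:4.2f})")
```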
(39) A transition between two viewing volumes is regulated with the aid of a logic, so that the viewing position does not continuously swing back and forth. For example, a change to a viewing position for a viewing volume having a detailed view is permitted by the logic only starting from the corresponding viewing position in the center; otherwise, the viewing position in the center is controlled first, and the viewing position for the viewing volume having the detailed view is controlled only thereafter.
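Such a transition logic may be sketched as a small routing function; the view labels are illustrative.

```python
def next_view(current, requested):
    """Transition logic (sketch): a detail view may only be entered
    from the center view; any other request is routed through the
    center first, which keeps the viewing position from swinging
    back and forth between distant views.
    """
    if requested.startswith("detail:") and current != "center":
        return "center"          # approach the center view first
    return requested

print(next_view("birds_eye", "detail:region_104"))   # center
print(next_view("center", "detail:region_104"))      # detail:region_104
```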
(40) A change from the viewing position in the center to the viewing position for the viewing volume having the detailed view and/or a change from the viewing position in the center to the viewing position from a bird's eye perspective is controlled with the aid of the logic via a time delay. A merely temporary danger of collision detected by the environment sensor system therefore does not lead to a change in the viewing position, since, due to the delay, the control of the change in the viewing position is overwritten by a counter-control before the end of the delay.
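The time delay can be sketched as a debouncing element: a requested change takes effect only if it is not overwritten by a counter-control before the delay expires. The class name and the delay value are illustrative.

```python
class DelayedViewChange:
    """Time-delayed view control (sketch): a requested change takes
    effect only if it survives the delay without being overwritten
    by a counter-control, so a merely temporary danger of collision
    never moves the viewing position.
    """
    def __init__(self, delay_s=1.0):
        self.delay_s = delay_s
        self.pending = None
        self.since = 0.0

    def update(self, requested, now_s):
        if requested != self.pending:
            self.pending, self.since = requested, now_s   # restart timer
            return None
        if now_s - self.since >= self.delay_s:
            return self.pending   # request survived the delay
        return None

ctrl = DelayedViewChange(delay_s=1.0)
print(ctrl.update("detail:region_104", 0.0))   # None, timer started
print(ctrl.update("default", 0.4))             # None, counter-control
print(ctrl.update("default", 1.5))             # default, delay elapsed
```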
(41) The change in viewing position is animated in the seventh step 7. For example, the movement along the trajectory is first accelerated, starting from the first viewing position, and braked before reaching the second viewing position. This enables the viewer of the output image to follow the change in perspective, so that the viewer is able to capture the new view and a possible danger of collision.
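Such an accelerate-then-brake animation corresponds to a smoothstep easing of the animation parameter, as the following sketch shows; the function name is illustrative.

```python
def eased(t: float) -> float:
    """Smoothstep easing for the seventh step 7: the movement along
    the trajectory accelerates away from the first viewing position
    and brakes before the second one is reached (0 <= t <= 1)."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

# Animation parameter sampled at five evenly spaced frame times.
print([round(eased(k / 4), 3) for k in range(5)])
# -> [0.0, 0.156, 0.5, 0.844, 1.0]
```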
(42) The invention is not limited to the embodiment variants illustrated in the drawings.
(43) Although the present invention has been illustrated and described with respect to several preferred embodiments thereof, various changes, omissions and additions to the form and detail thereof, may be made therein, without departing from the spirit and scope of the invention.