Method for assisting a user of an assistance system, assistance system and vehicle comprising such a system
20220001889 · 2022-01-06
CPC classification
B60W50/14 · B60W2554/40 · B60K35/00
International classification
B60W50/14
Abstract
The invention relates to an assistance system and a method for assisting a user of such an assistance system, the system including a display displaying representations of one or more objects in the environment of the display. In a first step, information on the environment of the display is obtained and the presence of objects in the environment is determined. Then, a representation of the environment including representations of the objects determined in the environment is generated. For at least one of the determined and displayed objects, a starting point for an indicator line is calculated from a direction of the object relative to the display, and, for each determined starting point, an indicator line connecting the starting point with the displayed representation of the real-world object is drawn.
Claims
1. Method for assisting a user of an assistance system including a display displaying representations of one or more objects in the environment of the display, comprising the following method steps: obtaining information on an environment of the display, determining presence of objects in the environment, displaying a representation of the environment including the objects determined in the environment, determining, for at least one of the determined and displayed objects, a starting point for an indicator line from a direction of the object relative to the display, and displaying, for each determined starting point, the indicator line connecting the starting point with the displayed representation of the real-world object.
2. Method according to claim 1, wherein the starting point is calculated as a point of intersection between an outer edge of the display and a surface area extending from a vertical axis y through a reference point of the display and including a direction vector pointing from the reference point towards the object for which the indicator line shall be displayed.
3. Method according to claim 2, wherein the direction vector is calculated as pointing from the reference point in the display to a center of the determined object.
4. Method according to claim 1, wherein for at least one determined and displayed object an indicator bar corresponding to a scaled horizontal extension of the determined object is displayed along the outer edge of the display, wherein the indicator bar includes the starting point.
5. Method according to claim 2, wherein for at least one determined and displayed object an indicator bar corresponding to a scaled horizontal extension of the determined object is displayed along the outer edge of the display, wherein the indicator bar includes the starting point.
6. Method according to claim 3, wherein for at least one determined and displayed object an indicator bar corresponding to a scaled horizontal extension of the determined object is displayed along the outer edge of the display, wherein the indicator bar includes the starting point.
7. Method according to claim 4, wherein the indicator bar extends from an intersection point of the edge of the display and a first boundary surface area to an intersection point of the edge of the display and a second boundary surface area, wherein the first boundary surface area extends from the vertical axis y through the reference point and includes a left direction vector pointing from the reference point towards a first outermost perceivable boundary of the determined object in the horizontal direction, and the second boundary surface area extends from the vertical axis through the reference point and includes a right direction vector pointing from the reference point towards the opposite, second outermost perceivable boundary of the determined object in the horizontal direction.
8. The method according to claim 7, wherein, in case of coincidence of the first boundary surface area of a first determined object and the second boundary surface area of a second determined object, directly adjacent indicator bars are displayed using distinguishable characteristics.
9. Assistance system comprising a display controlled by a processing unit, the processing unit being configured to obtain information on an environment of the display, to determine presence of objects in the environment, to cause the display to display a representation of the environment including the objects determined in the environment, to determine, for at least one of the determined and displayed objects, a starting point for an indicator line from a direction of the object relative to the display, and to cause the display to display, for each determined starting point, the indicator line connecting the starting point with the displayed representation of the object.
10. Assistance system according to claim 9, wherein the processing unit is further configured to cause the display to display, for at least one determined and displayed object, an indicator bar corresponding to a perceivable horizontal extension of the determined object along the outer edge of the display, wherein the indicator bar includes the starting point.
11. Vehicle comprising the assistance system according to claim 9.
12. Vehicle comprising the assistance system according to claim 10.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] Further aspects and details will now be explained with reference to the annexed drawings.
DETAILED DESCRIPTION
[0026] For simplicity of the drawings, only the display 1 of the vehicle is shown, because vehicles comprising driver assistance systems including such displays are readily known in the art.
[0027] In the situation illustrated in the figures, a vehicle 3 can be identified on the right neighboring lane, and further a lorry 5 following the vehicle 3. Additionally, on the ego-lane, a further vehicle 4 is driving as a predecessor.

[0028] Objects that are sensed by the sensors and determined from the sensor signals by post-processing are, of course, not limited to vehicles. Many systems available on the market are capable of identifying a plurality of different objects from sensor signals. Sensors may be, for example, radar sensors, LIDAR sensors, ultrasonic sensors, etc. The identification of objects in signals received from such sensors is thus known in the art, and an explanation thereof is omitted for conciseness.
[0029] The explanations given hereinafter refer to vehicle 3 in order not to limit the description in any way to vehicles driving on the same lane as the ego-vehicle. However, the explanations apply in the very same way to any object that can be determined in the environment of the display 1. It is particularly to be noted that the following explanations all refer to vehicles, as here the advantageous aspects become immediately evident: perception of moving objects in the environment of a traffic participant is more challenging than identification of static elements. Nevertheless, the invention is applicable to all objects that can be determined in the environment of the display 1.
[0030] A sensor, not illustrated in the figures, physically senses the environment of the display 1.
[0031] From the sensor output, the representation of the environment of the display 1 is calculated. Objects that can be determined from the sensor outputs are displayed as corresponding icons on the display 1. Depending on the resolution of the display 1 and the processing performance of the entire system, it is also possible to use images representing the real-world objects instead of icons. In the example illustrated in the figures, icons are used.
[0032] According to the invention, the user of the assistance system shall be assisted in identifying a correspondence between objects perceivable outside the display 1 in the real world and their corresponding icons displayed on the display 1. In the present case, this will be explained with reference to vehicle 3 as an example.
[0033] Based on the sensor output, a starting point 7 is calculated by the processor (not shown in the figures).
[0034] As will be explained hereafter, the position of the starting point 7 on the edge 10 of the display 1 is determined such that the indicator line 8 resembles a direct connection from the real-world vehicle 3 to the corresponding icon 3′. As briefly discussed above, the position of the real-world vehicle 3 in the coordinate system of the display unit 1, which is the relative position of the vehicle 3 to the display 1, is known.
[0035] In addition to the indicator line 8, an indicator bar 9 is displayed on the display 1 in a preferred embodiment. The indicator bar 9 extends along the outer edge 10 of the display 1. The length of the extension along the edge 10 of the display 1 corresponds to the horizontal dimension of the corresponding real-world object as this dimension is perceivable by a user of the assistance system. This means that for objects at a greater distance to the display 1, only a shorter indicator bar 9 is displayed on the display 1. The same object being closer to the display 1 would be represented by an indicator bar 9 with a larger extension.
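This distance dependence follows directly from the angle under which the object appears to the user. A minimal sketch, assuming the object front is roughly perpendicular to the line of sight (the function name and the approximation are illustrative, not taken from the patent):

```python
import math

def perceived_width_angle(object_width, distance):
    """Horizontal angle (radians) under which an object of the given
    width appears at the given distance. The indicator bar 9 scales with
    this angle, so the same object yields a shorter bar when farther away."""
    return 2.0 * math.atan(object_width / (2.0 * distance))
```

For example, a 2 m wide vehicle subtends about 11.4° at 10 m but only about 2.9° at 40 m, which is why the closer object receives the longer bar.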
[0036] Depending on the relative position of the real-world object to the display 1, the indicator bar 9 may extend over a plurality of edges 10 of the display 1, which, for example, may have a rectangular shape. In the illustrated embodiment, this can be seen by the indicator bar in the right upper corner of the display 1 representing the lorry 5. However, as will be apparent from the following explanations, all calculations determining the starting points 7 and the endpoints of the indicator bars 9 refer to the same reference point on the display 1. This reference point is the center of the icon 6 and is the point at which all axes of the coordinate system intersect. The absolute orientation of the coordinate system of the display 1 is not relevant for the calculation of the starting point 7 and the indicator bar 9 as long as two conditions are fulfilled: [0037] a) one axis has to extend in the vertical direction so that the other two span a horizontal plane, and [0038] b) the orientation of the coordinate system is static.
[0039] For a first explanation of the inventive method, reference is now made to the figures.
[0041] Before the display 1 can output the screen described above, the calculations explained in the following are performed.
[0042] The ego-vehicle has mounted thereon the assistance system of the present invention, including at least one sensor allowing to determine the direction in which objects in the environment of the ego-vehicle are located. The position determination may be done in a first coordinate system, which is in the present case indicated by the arrow pointing in the driving direction of the ego-vehicle. With reference to this coordinate system, an angle α is determined. Generally, the position and orientation of the display 1 and the coordinate system used for determining the position of the vehicle 3 have a fixed relation and, thus, the position of the vehicle 3 in the coordinate system of the display 1 can be easily determined. For the understanding of the present invention, it is sufficient to assume that the coordinate system of the sensor and the coordinate system of the display 1 coincide. It is to be noted that this assumption is only made for ease of understanding and without limiting the generality.
[0043] As mentioned above, the coordinate system of the display 1 is arranged such that all three axes of the coordinate system run through the reference point corresponding to the ego-vehicle's position on the display 1, which is the origin of the coordinate system.
[0044] Once the direction of the vehicle 3 relative to the display 1 is known, a surface area S_o is determined, extending from the vertical axis y running through the reference point of the display 1 and including a direction vector d pointing at the vehicle 3.
[0045] This surface area S_o intersects with the display 1 and, thus, has an intersection point with one edge 10 of the display 1. In order to avoid that the surface area S_o has a second intersection point, the surface area S_o extends only from the vertical axis y in the direction of the direction vector d pointing at the real-world object, vehicle 3. The only requirement that must be fulfilled is that the display 1 and the vertical axis y intersect in one point, meaning that the vertical axis does not lie in the plane of the display 1.
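In display coordinates, this construction reduces to casting a ray from the reference point in the horizontal direction of d and finding where it first crosses the outer edge 10. A minimal sketch in Python, assuming a rectangular display that is axis-aligned in its own coordinate system with the reference point inside the rectangle (the function and parameter names are illustrative, not from the patent):

```python
def edge_starting_point(d, rect):
    """Starting point 7: where the half-plane S_o (spanned by the vertical
    axis y and the direction vector d) meets the outer edge 10.

    d    -- (dx, dy, dz) direction towards the real-world object, y vertical
    rect -- (xmin, xmax, zmin, zmax) display boundary in display coordinates;
            the reference point (origin, position of the ego icon 6) must
            lie inside this rectangle, i.e. xmin < 0 < xmax, zmin < 0 < zmax
    """
    dx, _, dz = d                      # drop the vertical component: S_o
    xmin, xmax, zmin, zmax = rect      # contains the whole y axis anyway
    ts = []                            # ray parameters where each edge line
    if dx > 0: ts.append(xmax / dx)    # would be reached
    if dx < 0: ts.append(xmin / dx)
    if dz > 0: ts.append(zmax / dz)
    if dz < 0: ts.append(zmin / dz)
    if not ts:
        raise ValueError("direction has no horizontal component")
    t = min(t for t in ts if t > 0)    # first boundary crossing along the ray
    return (dx * t, dz * t)
```

For example, with d = (4.0, 0.5, 7.0) and a display spanning x ∈ [−2, 2], z ∈ [−3, 3], the ray exits through the upper edge at roughly (1.71, 3.0).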
[0046] Since the position of the icon 3′ is known in advance, and having now determined the position of the starting point 7 on the edge 10 of the display 1, the indicator line 8 can be drawn. For choosing the second end point of the indicator line 8 (the tip of the arrow), a plurality of different approaches are possible: First, the indicator line 8 may connect the starting point 7 and the center of the area of the icon 3′. Second, the indicator line 8 may extend along the intersection line of the surface area S_o and the display 1. This intersection line necessarily extends from the starting point 7 towards the origin of the coordinate system of the display 1. Assuming that a user of the assistance system intuitively identifies his own position with the icon 6 indicating the ego-vehicle and, thus, with the position of the reference point on the display 1, this gives the most natural approach for easily identifying objects of the real world in the environment of the display 1 with the corresponding icons. The length of the indicator line 8 may, however, be chosen based on design considerations, for example, to avoid occluding other objects. Thus, the indicator line 8 may extend to the center of the icon or may only point to the boundary of the icon area.
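The choice between ending at the icon center and stopping at the icon boundary can be expressed in a few lines. A sketch under the assumption that the icon area is approximated by a circle of radius icon_radius around its center (all names are illustrative):

```python
import math

def indicator_line_end(start, icon_center, icon_radius=0.0):
    """Second endpoint of the indicator line 8. With icon_radius == 0 the
    line runs to the icon center (first approach of [0046]); with a
    positive radius it stops at the icon boundary, e.g. to avoid
    occluding the icon itself."""
    sx, sz = start
    cx, cz = icon_center
    dx, dz = cx - sx, cz - sz
    length = math.hypot(dx, dz)
    if length <= icon_radius:          # start already inside the icon area
        return start
    t = (length - icon_radius) / length
    return (sx + t * dx, sz + t * dz)
```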
[0047] It is to be noted that for determining the direction vector d, well-known image processing techniques may be applied. For example, from the data received from the sensors, an outline of an image of the real-world object in the environment representation can be generated, and the coordinates of the center of the area of such an outline can be chosen as the tip of the direction vector. Other alternatives may be thought of as well.
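As a simple approximation of this idea, the tip of d can be taken as the mean of the outline points. The patent leaves the exact center computation open; the sketch below is one illustrative option, not the prescribed method:

```python
def direction_vector(outline, reference_point=(0.0, 0.0, 0.0)):
    """Approximate the direction vector d as pointing from the reference
    point to the mean of the object's outline points, as one way to realize
    the 'center of the outline area' of [0047].

    outline -- sequence of (x, y, z) points obtained from the sensor data
    """
    n = len(outline)
    cx = sum(p[0] for p in outline) / n
    cy = sum(p[1] for p in outline) / n
    cz = sum(p[2] for p in outline) / n
    rx, ry, rz = reference_point
    return (cx - rx, cy - ry, cz - rz)
```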
[0048] The determination of the extension of the indicator bar 9 is illustrated in the figures as well.
[0049] For determining a first endpoint and a second endpoint of the indicator bar 9, intersection points 11, 12 of a first surface area S_L and a second surface area S_R with the edge 10 are determined. The calculation of the intersection points 11, 12 of the first surface area S_L and the second surface area S_R with the edge 10 of the display 1 is similar to the calculation of the starting point 7 explained above for the surface area S_o.
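Reusing the edge_starting_point sketch from above, the two endpoints follow from the directions d_L and d_R towards the leftmost and rightmost perceivable boundaries of the object; a perimeter parameterization then lets a renderer draw the bar even when it wraps around a corner of the display, as for the lorry 5. Again an illustrative sketch, not the patent's implementation:

```python
def indicator_bar(d_left, d_right, rect):
    """Intersection points 11 and 12 of the boundary surface areas S_L and
    S_R with the display edge, computed exactly like the starting point 7
    (see edge_starting_point above)."""
    return edge_starting_point(d_left, rect), edge_starting_point(d_right, rect)

def perimeter_position(p, rect, eps=1e-9):
    """Arc length of an edge point p when walking the display boundary
    clockwise from the corner (xmin, zmin); the indicator bar 9 is then the
    perimeter interval between its two endpoints, which may span a corner
    of the rectangle."""
    x, z = p
    xmin, xmax, zmin, zmax = rect
    w, h = xmax - xmin, zmax - zmin
    if abs(x - xmin) < eps:            # left edge, walking towards zmax
        return z - zmin
    if abs(z - zmax) < eps:            # top edge, walking towards xmax
        return h + (x - xmin)
    if abs(x - xmax) < eps:            # right edge, walking towards zmin
        return h + w + (zmax - z)
    return 2 * h + w + (xmax - x)      # bottom edge, back towards xmin
```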
[0050] The assistance system 15 comprises one or more sensors 13 sensing the environment of the display 1.
[0051] The sensor output is supplied to the processor 14 which, based on information contained in the sensor signal, calculates a representation of the environment in a representation generation unit 16. According to the invention, and also based on the sensor output, a surface calculation unit 17 calculates one or more surface areas S_o, S_L, S_R as explained above and supplies the information on these surfaces to a starting point/endpoint calculation unit 18.
[0052] It is to be noted that the units 16, 17 and 18 may all be realized as software modules, with the software being executed on the same processor 14. The "processor" 14 may also consist of a plurality of individual processors that are combined into a processing unit. Further, the coordinates of the display 1 in the coordinate system of the display 1 are known and stored in the assistance system 15. Thus, based on the surface areas S_o, S_L, S_R that are defined in the surface calculation unit 17, it is easy to calculate intersections (intersection points as well as intersection lines) between these surface areas S_o, S_L, S_R and the surface of the display 1. It is to be noted that whenever the explanations given with respect to the present invention refer to the "display 1", the display surface visible to a user is meant, not the entire display unit.
[0053] As explained earlier, a coordinate transformation may be made in a preprocessing step after the environment of the display 1 has been sensed by the sensors 13. Such a coordinate transformation is necessary only in case the coordinate system used for the sensors 13 (and thus for the determination of the relative position of an object in the real world) and the coordinate system of the display 1 are not the same. In case the positions of the sensors 13 and the display 1 are very close, the same coordinate system may be used and a conversion of the coordinates becomes unnecessary.
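Since both mounting positions are fixed, this preprocessing step is a constant rigid transform. A minimal 2-D sketch in the horizontal plane; yaw and offset are hypothetical calibration parameters standing in for the fixed relation between the two frames, not values from the patent:

```python
import math

def sensor_to_display(p_sensor, yaw, offset):
    """Transform a point from the sensor coordinate system into the
    display coordinate system, assuming the two frames differ only by a
    fixed rotation yaw (about the vertical axis) and a fixed horizontal
    translation offset = (ox, oz)."""
    x, z = p_sensor
    ox, oz = offset
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * z + ox, s * x + c * z + oz)
```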
[0054] The method can be summarized by the following sequence of steps.
[0055] First, using the sensors, the environment of the display 1 is sensed in step S1. Next, in step S2, objects in the environment are determined from the sensor output. In step S3, direction vectors d, d_L, d_R are calculated, which point to a center of the objects or to the extrema in the horizontal direction which identify the leftmost and rightmost boundaries of, for example, a traffic object.
[0056] In step S4, a surface area S_o, S_L, S_R is determined for each of the determined direction vectors d, d_L, d_R. These surface areas are then used in step S5 to calculate intersection points lying on the outer edge 10 of the display 1. Finally, in step S6, for each calculated starting point 7, an indicator line 8 is displayed. In case pairs of first and second endpoints are calculated from the respective surface areas S_L, S_R, corresponding indicator bars 9 are displayed.
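Tying the sketches above together, one update cycle of steps S3 to S6 could look as follows. The detection record layout (keys "outline", "left_boundary", "right_boundary", all in display coordinates) is an assumption for illustration; steps S1 and S2 are taken as given in the form of post-processed detections:

```python
def update_overlays(detections, rect):
    """One cycle of steps S3-S6 for already sensed and detected objects
    (steps S1/S2), using the illustrative helpers defined above."""
    overlays = []
    for obj in detections:
        d = direction_vector(obj["outline"])                  # step S3
        d_l = direction_vector([obj["left_boundary"]])        # step S3
        d_r = direction_vector([obj["right_boundary"]])       # step S3
        start = edge_starting_point(d, rect)                  # steps S4/S5
        bar = indicator_bar(d_l, d_r, rect)                   # steps S4/S5
        overlays.append({"indicator_line_start": start,       # data for
                         "indicator_bar": bar})               # step S6
    return overlays
```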
With the method according to the present invention, it is very easy for a user of the display 1 to identify real-world objects with the corresponding icons or other illustrations of the real-world objects displayed on the display 1. Thus, the time needed by the user to consider all relevant information for successfully and correctly assessing, for example, a traffic situation in a dynamic environment is significantly reduced. A detrimental effect of providing additional information is thus avoided.
[0057] The preferred field of application is an integration in advanced driver assistance systems or, more generally, assistance systems used in vehicles. Such systems could use onboard-mounted sensors and displays. However, standalone solutions may also be thought of. For example, a handheld device used for navigation could be equipped with respective sensors so that even such a standalone device could make use of the present invention. The latter could be a great improvement in safety for pedestrians, who tend to look at their mobile devices when navigating towards an unknown destination. With the present invention, objects perceived out of the corner of the eye may easily be identified with the corresponding icons on the display. Even without looking up to obtain complete information on a specific object in the real world, the pedestrian may make at least a basic estimation regarding the relevance of the respective real-world object.
[0058] Specifically, the combination of the indicator lines with the indicator bars allows an identification of the real-world objects at a glance. Calculating the surface areas as explained in detail above, all running through the origin of the coordinate system of the display 1 and thus through the reference point of the display 1, results in the dimensions of the real-world objects being scaled onto the edge of the display 1 by the indicator bars; this edge is intuitively recognized as the boundary between the "virtual world" and the "real world".