METHOD FOR VISUALIZING A CROSSING OVER A ROAD
20210370823 · 2021-12-02
Inventors
CPC classification
B60Q1/547
PERFORMING OPERATIONS; TRANSPORTING
B60Q1/525
PERFORMING OPERATIONS; TRANSPORTING
G06V20/588
PHYSICS
B60Q1/507
PERFORMING OPERATIONS; TRANSPORTING
B60Q2400/50
PERFORMING OPERATIONS; TRANSPORTING
International classification
B60Q1/50
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A method for visualizing a crossing over a road for a road user wanting to cross the road. At least one vehicle driving on the road stops and projects an image pattern onto the road for the road user crossing the road. The vehicle projecting the image pattern communicates with another vehicle driving on the road. The first vehicle, which projects the image pattern onto a first lane, at least indirectly informs the other vehicle in at least one further lane about the road user wanting to cross the road and about the image pattern which is projected or still to be projected. It requests the other vehicle to stop and, for its part, to project an image pattern onto its lane in addition to the image pattern of the first vehicle, or to at least indirectly report back to the first vehicle that the first vehicle should also take over the projection onto the other vehicle's lane.
Claims
1-8. (canceled)
9. A method for visualizing a crossing over a road for a road user wanting to cross the road, the method comprising: projecting, by a first vehicle in a first lane, an image pattern onto the road for the road user crossing the road, wherein the first vehicle projects the image pattern while stopped; communicating, by the first vehicle, with at least one further vehicle driving on the road in a further lane by at least indirectly informing the at least one further vehicle about the road user wishing to cross the road and about the projected image pattern; and requesting the at least one further vehicle to stop and to (1) project an image pattern onto the further lane in addition to the image pattern of the first vehicle or (2) at least indirectly report back to the first vehicle that the first vehicle should take over the projection onto the further lane, wherein the image pattern comprises at least two graphically different or differently colored states, a first one of the at least two graphically different or differently colored states indicating that the entire road can be crossed safely and a second one of the at least two graphically different or differently colored states indicating that only the lane on which the image pattern is projected can be crossed safely, and wherein the first vehicle monitors the implementation of the request to the at least one further vehicle via a feedback communication or an environment sensor system of the first vehicle, and, after the request has been followed or when it can be predicted that the request will be followed, the first vehicle adjusts a state of the image pattern.
10. The method of claim 9, wherein the communication by the first vehicle uses optical signals of the first vehicle.
11. The method of claim 10, wherein the optical signals are detected and evaluated by an environment sensor system of the at least one further vehicle.
12. The method of claim 9, wherein the communication by the first vehicle uses a vehicle-to-vehicle communication or occurs indirectly via a vehicle-to-X and an X-to-vehicle communication.
13. The method of claim 12, wherein a communication link between the first and at least one further vehicle is terminated, starting from the first vehicle, after the road user wishing to cross the road has crossed the road.
14. The method of claim 9, wherein the image pattern further comprises a graphical depiction of a blocked area not to be entered.
15. The method of claim 9, wherein the first vehicle, in case of an adaptation of the image pattern, informs the at least one further vehicle and in turn requests an analogous adaptation of the image pattern, provided that the at least one further vehicle projects its part of the image pattern itself.
16. The method of claim 9, wherein the image pattern is projected as a zebra crossing via pixel headlights or via a laser light or laser projection.
Description
[0022] Further advantageous embodiments of the method according to the invention further arise from the remaining dependent sub-claims and become clear on the basis of the exemplary embodiments, which are described in more detail below with reference to the figures.
[0023] Here are shown:
[0024]
[0025]
[0026]
[0027]
[0028]
[0029]
[0030]
DETAILED DESCRIPTION
[0031] In the depiction of
[0032] The scenario described is relatively simple, since the vehicle 4.1, which is also referred to below as the first vehicle, can detect the situation and decide on the safety of the crossing on its own. However, if the road 1 has several lanes, as is indicated in the depiction of
[0033] As the “master”, the first vehicle 4.1 now establishes communication with the further vehicles 4.2 and 4.3 and informs them about the situation. This is of particular interest in the case of the vehicle 4.2, since there is a risk that both the projected image pattern 8 and the pedestrian 3 are so obscured by the vehicle 4.1 that a person driving the vehicle 4.2 or its environment sensor system 5 cannot perceive them accordingly. The communication between the vehicle 4.1 and the vehicle 4.2 can be implemented, for example, via a radio communication, which is also referred to as vehicle-to-vehicle or Car2Car communication. Via such a Car2Car communication, the vehicle 4.1 can request the vehicle 4.2 to reduce its speed accordingly, to stop and likewise to project an image pattern 8 in the form of a zebra crossing. This situation is shown accordingly in the depiction of
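The request from the master to a further vehicle described above can be sketched as a simple Car2Car message. This is only an illustrative sketch: the disclosure does not define a concrete message format, and all class, field, and function names here are assumptions.

```python
from dataclasses import dataclass


@dataclass
class CrossingRequest:
    """Illustrative Car2Car message the master vehicle (e.g. 4.1) sends to
    vehicles in other lanes; all field names are assumptions."""
    sender_id: str                 # master vehicle, e.g. "4.1"
    crossing_position: tuple       # (x, y) road coordinates of the crossing
    target_lane: int               # lane the receiver is asked to cover
    pattern: str = "zebra"         # image pattern to project
    state: str = "one lane safe"   # initial coding of the projected pattern
    actions: tuple = ("reduce_speed", "stop", "project_pattern")


def build_requests(master_id, position, further_lanes):
    """Build one request per further lane (e.g. lanes 1.2 and 1.3)."""
    return [CrossingRequest(master_id, position, lane) for lane in further_lanes]


# Example: vehicle 4.1 addresses the vehicles in lanes 2 and 3.
requests = build_requests("4.1", (120.0, 4.5), [2, 3])
```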
[0034] The first vehicle 4.1 will now also make contact with the vehicle 4.3 in the lane 1.3 of the road. Here too, the first vehicle 4.1 informs the vehicle 4.3 accordingly. Furthermore, the first vehicle 4.1 requests the third vehicle 4.3 to also stop and in turn project a zebra crossing as image pattern 8. The depiction in
[0035] As soon as the pedestrian 3 has crossed the road, this can in turn be recognized accordingly by the first vehicle 4.1, which here acts as the master. It then breaks off communication with the other vehicles 4.2 and 4.3 acting as slaves, such that after the “situation” has been resolved, no further communication takes place between the vehicles 4.1, 4.2, 4.3, and the corresponding communication channels are freed again for other communications.
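The teardown of the communication links after the situation is resolved can be sketched as follows; the class and method names are illustrative assumptions, not part of the disclosure.

```python
class MasterLinkManager:
    """Illustrative sketch of the master vehicle (4.1) managing its Car2Car
    links to the slave vehicles (4.2, 4.3)."""

    def __init__(self):
        self.links = {}  # slave id -> link state

    def open_link(self, slave_id):
        """Establish communication with a slave vehicle."""
        self.links[slave_id] = "open"

    def resolve_situation(self):
        """The pedestrian has crossed: break off all slave links so the
        communication channels are available again for other uses."""
        self.links.clear()


manager = MasterLinkManager()
manager.open_link("4.2")
manager.open_link("4.3")
manager.resolve_situation()  # no further communication takes place
```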
[0036] The depiction in
[0037] The first vehicle 4.1 detects the will of the pedestrian 3 and initiates the pedestrian crossing. In doing so, it operates in a master mode, while the other vehicles 4.2, 4.3 are requested by the first vehicle 4.1 to cooperate and accordingly operate in a slave mode.
[0038] As already described, the vehicle 4.1 has detected a situation in which the pedestrian 3 wants to cross the road 1. Based on its situation analysis, the first vehicle 4.1 supports this by stopping, as already mentioned above, and by displaying the image pattern 8 as a zebra crossing directly in front of the pedestrian 3 on the road, in the case of a road 1 with several lanes 1.1, 1.2, 1.3 with the coding “one lane safe”, as indicated in
[0039] The vehicles 4.2 and 4.3 are in slave mode during this process. By way of example, the vehicle 4.2 has detected, either via its environment sensor system and corresponding optical signals from the vehicle 4.1 or via a Car2Car communication, that along the planned route of travel, in the example of the previous figures in lane 1.2, a pedestrian crossing initiated by the first vehicle 4.1 is planned. Depending on the distance to this pedestrian crossing initiated by the first vehicle 4.1, the second vehicle 4.2 reduces its own speed and, after the location of the projection has been detected, triggers the corresponding connecting projection on the lane 1.2, as is depicted accordingly in
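The slave-mode behaviour just described (reduce speed depending on the distance to the announced crossing, then trigger the connecting projection) can be sketched as follows. The braking model and the function name are illustrative assumptions; the disclosure does not specify a concrete control law.

```python
def slave_reaction(distance_m, speed_mps, projection_located):
    """Illustrative slave-mode step for a vehicle such as 4.2: returns the
    actions to take given the distance to the announced crossing, the own
    speed, and whether the projection location has been detected."""
    actions = []
    if speed_mps > 0:
        # Decelerate so the vehicle comes to a stop at the crossing
        # (v^2 = 2*a*d); the braking strategy is an assumption.
        decel = speed_mps ** 2 / (2 * max(distance_m, 1.0))
        actions.append(("decelerate", round(decel, 2)))
    elif projection_located:
        # Stopped and projection location known: trigger the connecting
        # projection on the own lane, initially coded "one lane safe".
        actions.append(("project", "zebra", "one lane safe"))
    return actions


# Approaching at 10 m/s, 50 m from the crossing:
approach = slave_reaction(50.0, 10.0, False)
# Stopped, projection location detected:
stopped = slave_reaction(5.0, 0.0, True)
```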
[0040] The request from the first vehicle 4.1 to the vehicles 4.2 and 4.3 to switch the state of the image pattern 8 from “one lane safe” to “road safe”, i.e., to change the color from yellow to green, as described in the exemplary embodiment depicted here, can again be made via the Car2Car communication, or by an environment sensor system of the other vehicles 4.2 and 4.3, or by a person driving these vehicles detecting that the first vehicle 4.1 has switched its part of the zebra crossing as an image pattern to green. As mentioned, this can be detected by the person or the environment sensor system and serve as a request to also switch the state of the zebra crossing from yellow to green.
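The two pattern states and the master-driven switch from “one lane safe” (yellow) to “road safe” (green) can be sketched as a small state holder. This is a minimal illustrative model; the class names and the dictionary-of-lanes representation are assumptions.

```python
# The two states of the image pattern and their colors, as described in
# the exemplary embodiment.
STATE_COLOR = {"one lane safe": "yellow", "road safe": "green"}


class ProjectedPattern:
    """Illustrative state holder for one lane's projected zebra crossing."""

    def __init__(self):
        self.state = "one lane safe"

    @property
    def color(self):
        return STATE_COLOR[self.state]

    def set_road_safe(self):
        self.state = "road safe"


def master_update(lane_patterns):
    """The master switches every pattern to 'road safe' once all lanes are
    covered by a projection; slaves may equally follow after detecting the
    master's color change via their environment sensor system."""
    if all(p is not None for p in lane_patterns.values()):
        for p in lane_patterns.values():
            p.set_road_safe()


# All three lanes of road 1 are covered, so the whole crossing turns green.
lanes = {1: ProjectedPattern(), 2: ProjectedPattern(), 3: ProjectedPattern()}
master_update(lanes)
```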
[0041] In the depiction of
[0042] In the depiction of
[0043] Finally, the depiction in
[0044] The described method in its embodiment variants functions in particular amongst autonomously driving vehicles. However, it is also possible to include non-autonomously driving vehicles or vehicles that are only partially autonomous by means of corresponding driver assistance systems. By way of example, an autonomous vehicle can be operated as the first vehicle 4.1 in master mode, and a non-autonomous vehicle in slave mode. The non-autonomous vehicle requires an attentive driver, as in today's traffic, who then cooperates accordingly and thus effectively stops in slave mode. This can then be detected by the first vehicle 4.1 and its environment sensors. If the non-autonomous vehicle, for example the vehicle 4.2, has assistance systems that can detect the situation via Car2Car communication or via its environment sensing 5, then a corresponding display can also be presented to the driver for assistance in order to prompt him/her to act cooperatively. This is done by the display module 14 optionally indicated in
[0045] In any case, such a non-autonomous vehicle, for example the vehicle 4.2, can, if it has a pixel light or a laser light, use it to project the corresponding image pattern 8. This projection must then be triggered manually by the person driving the vehicle 4.2. If he/she is unable to do so, this can be indicated via a feedback communication, for example in the case of Car2Car communication. It can also be detected by the first vehicle 4.1 in the master mode that the further vehicle 4.2 has stopped accordingly but is not projecting a zebra crossing as image pattern 8. If the projection area of the first vehicle's pixel headlights 7 or its laser light or laser projection is sufficient, the first vehicle 4.1 can then also take over the projection for this non-autonomous vehicle, provided that it has reliably detected via the communication or its environment sensors that this vehicle is stationary.
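The fallback decision described above, in which the master takes over the projection for a stopped but non-projecting vehicle, can be sketched as a simple predicate. The function name, parameters, and range check are illustrative assumptions.

```python
def decide_projection_takeover(slave_stopped, slave_projecting,
                               master_range_m, distance_to_lane_m):
    """Illustrative fallback check for the master vehicle (4.1): take over
    the projection onto the further lane only if the further vehicle has
    reliably been detected as stationary, it is not projecting itself, and
    the master's pixel headlights / laser projection can reach that lane."""
    return (slave_stopped
            and not slave_projecting
            and master_range_m >= distance_to_lane_m)


# Vehicle 4.2 has stopped but does not project; the master's projection
# range (20 m, an assumed value) covers the 8 m to lane 1.2.
takeover = decide_projection_takeover(True, False, 20.0, 8.0)
```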
[0046] In the reverse case, in which a non-autonomous vehicle is in master mode, an autonomous vehicle in slave mode, for example the vehicle 4.2 and/or 4.3, can be informed of the situation via the communication, for example via the environment sensors 5 or a Car2Car communication, and can react to it in the manner already described above, since it is irrelevant for the vehicle in slave mode whether the vehicle in master mode is operated autonomously or non-autonomously.
[0047] All in all, this results in the possibility of an extraordinarily comfortable and safe crossing of a road 1 with all its lanes 1.1, 1.2, 1.3 for a pedestrian 3.
[0048] Although the invention has been illustrated and described in detail by way of preferred embodiments, the invention is not limited by the examples disclosed, and other variations can be derived from these by the person skilled in the art without leaving the scope of the invention. It is therefore clear that there is a plurality of possible variations. It is also clear that the embodiments stated by way of example are really only examples that are not to be seen as limiting the scope, application possibilities or configuration of the invention in any way. In fact, the preceding description and the description of the figures enable the person skilled in the art to implement the exemplary embodiments in a concrete manner. With knowledge of the disclosed inventive concept, the person skilled in the art is able to undertake various changes, for example with regard to the functioning or arrangement of individual elements stated in an exemplary embodiment, without leaving the scope of the invention, which is defined by the claims and their legal equivalents, such as further explanations in the description.