VISUAL GUIDANCE SYSTEM FOR PARKING ASSISTANCE

20230059075 · 2023-02-23

    Abstract

    The disclosure relates to a method and system for providing visual guidance to a driver of a vehicle. The method comprises detecting, by a plurality of sensors of the vehicle, sensor data representing a vehicle environment; dynamically displaying, by a display of the vehicle, a visual representation of the vehicle environment; and overlaying the visual representation with one or more first visual elements representing a proposed trajectory of a maneuver of the vehicle within the vehicle environment.

    Claims

    1. A method for providing visual guidance to a driver of a vehicle, the method comprising: detecting, by a plurality of sensors of the vehicle, sensor data representing a vehicle environment; dynamically displaying, by a display of the vehicle, a visual representation of the vehicle environment; and overlaying the visual representation of the vehicle environment with one or more first visual elements representing a proposed trajectory of a maneuver of the vehicle within the vehicle environment.

    2. The method of claim 1, wherein the proposed trajectory of the maneuver of the vehicle comprises a current position of the vehicle and a final position of the maneuver.

    3. The method of claim 1, wherein the visual representation of the vehicle environment includes at least one of: a top view of the vehicle within the vehicle environment, and a three-dimensional view of the vehicle within the vehicle environment.

    4. The method of claim 1, wherein objects of the vehicle environment external to the vehicle remain static within the displayed visual representation of the vehicle environment when updating the displayed visual representation of the vehicle environment during a movement of the vehicle.

    5. The method of claim 1, wherein the method further comprises overlaying, simultaneously with the one or more first visual elements representing the proposed trajectory of the maneuver of the vehicle, the visual representation of the vehicle environment with one or more second visual elements to assist the driver in performing the maneuver.

    6. The method of claim 5, wherein the one or more second visual elements are indicative of a current trajectory of the vehicle relative to the proposed trajectory of the maneuver of the vehicle.

    7. The method of claim 5, wherein the one or more second visual elements indicate at least one of: if the driver shall change a current trajectory of the vehicle; how the driver shall change the current trajectory of the vehicle; if the driver shall increase or decrease a velocity of the vehicle; how the driver shall increase or decrease the velocity of the vehicle; if the driver shall change a gear of the vehicle; and how the driver shall change the gear of the vehicle.

    8. The method of claim 7, wherein the current trajectory includes a direction of the vehicle.

    9. The method of claim 1, wherein the method further comprises: displaying a driver interface for selecting a maneuver of a plurality of predetermined maneuvers within the displayed visual representation of the vehicle environment; determining, based on a driver input, the maneuver of the plurality of predetermined maneuvers; and determining the proposed trajectory of the maneuver of the plurality of predetermined maneuvers.

    10. The method of claim 1, wherein the method further comprises: determining the maneuver based on at least one of: the detected sensor data representing the vehicle environment; and a user input; determining, in response to the determination of the maneuver, the proposed trajectory of the maneuver of the vehicle based on the detected sensor data representing the vehicle environment; and displaying the one or more first visual elements representing the proposed trajectory of the maneuver of the vehicle.

    11. The method of claim 10, wherein the maneuver is a parking maneuver.

    12. The method of claim 10, wherein the one or more first visual elements indicate a deviation of a current vehicle motion from the proposed trajectory of the maneuver of the vehicle.

    13. The method of claim 5, wherein at least one of the one or more first visual elements and the one or more second visual elements is indicative of an accumulated driver performance.

    14. The method of claim 1, wherein the method further comprises: determining a travelled trajectory of the vehicle for one or more maneuvers performed by the vehicle; comparing the travelled trajectory of the vehicle to the proposed trajectory of the maneuver of the vehicle; determining one or more scores based on the comparison of the trajectories; storing the one or more scores; and determining an accumulated driver performance based on the one or more scores.

    15. The method of claim 1, wherein the method further comprises at least one of: generating one or more warnings if a speed of the vehicle is above a first predetermined threshold speed; deactivating the display of the proposed trajectory of the maneuver of the vehicle if the speed of the vehicle is above a second predetermined threshold speed for at least a predetermined duration; and deactivating the display of the proposed trajectory of the maneuver of the vehicle if the speed of the vehicle is above a third predetermined threshold speed, wherein the third predetermined threshold speed is larger than the first predetermined threshold speed.

    16. The method of claim 1, wherein the method further comprises at least one of: generating one or more warnings if a positional deviation of the vehicle with respect to the proposed trajectory is above a first predetermined threshold deviation; deactivating the display of the proposed trajectory of the maneuver of the vehicle if the positional deviation of the vehicle is above a second predetermined threshold deviation for at least a predetermined duration; and updating the one or more first visual elements representing the proposed trajectory of the maneuver of the vehicle if the positional deviation of the vehicle is above a third predetermined threshold deviation, wherein the third predetermined threshold deviation is larger than the first predetermined threshold deviation.

    17. A method for providing visual guidance to a driver of a vehicle, the method comprising: detecting, by a plurality of sensors of the vehicle, sensor data representing a vehicle environment; dynamically displaying, by a display of the vehicle, a visual representation of the vehicle environment; detecting one or more objects, by the plurality of sensors of the vehicle, in a vicinity of the vehicle; and displaying one or more visual elements if a distance between the vehicle and the one or more objects is below a predetermined threshold distance, wherein the one or more visual elements comprise at least one of: one or more camera views of the one or more objects; and an indication of the distance to the one or more objects.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0031] The features, objects, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference numerals refer to similar elements.

    [0032] FIG. 1A depicts a flow chart of a first method for providing visual guidance to a driver of a vehicle;

    [0033] FIGS. 1B-1C depict a respective flow chart of further optional steps of the method for providing visual guidance to a driver of a vehicle;

    [0034] FIG. 2 depicts a live top view of a vehicle in a vehicle environment;

    [0035] FIG. 3 depicts an example of displayed first and second visual elements indicating that the driver shall adjust the direction of the vehicle;

    [0036] FIG. 4 depicts an example of displayed first and second visual elements indicating that the driver shall adjust the velocity and gear of the vehicle;

    [0037] FIGS. 5A-5B depict a flow chart of a respective method for generating a warning or deactivating the display of visual guidance;

    [0038] FIG. 6 depicts a flow chart of a second method for providing visual guidance to a driver of a vehicle according to another embodiment;

    [0039] FIG. 7 depicts an example of displayed third visual elements;

    [0040] FIG. 8 depicts a block diagram of a system to provide visual guidance to a driver of a vehicle.

    DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

    [0041] FIG. 1A shows a flow chart of a method 100 for providing visual guidance to a driver of a vehicle. In step 102, sensor data are detected by a plurality of sensors of the vehicle. The sensor data are representative of a vehicle environment. For example, a map may be created using data captured by all or a subset of the available sensors in the vehicle. Possible sensors comprise ultrasonic sensors, cameras with computer vision (classical or neural-network based), LIDAR, or laser scanners. The vehicle environment may comprise the vehicle itself.

    [0042] In step 104, a visual representation of the vehicle environment is dynamically displayed by a display of the vehicle. The visual representation is based on the sensor data. In a preferred embodiment, the visual representation is a top view and/or a three-dimensional view of the vehicle within the vehicle environment. Thereby, a surround overview of the vehicle environment is provided. The top view/three-dimensional view may be created by stitching several camera images, for example captured by cameras at the front, left, right, and rear of the vehicle. FIG. 2 depicts such an exemplary top view of a vehicle in a vehicle environment. In a preferred embodiment, the vehicle environment may be displayed including information about real objects in the vicinity of the vehicle. For example, the visual representation may comprise information that the vehicle is bordered by a wall on one side and grass on the other side. The visual representation may be an image or a live view. For example, changing conditions can be displayed in real time and the visual guidance may be adjusted accordingly. The visual representation of the vehicle environment may comprise respective visual representations of the vehicle and of objects external to the vehicle, such as other vehicles. According to an embodiment, objects of the displayed vehicle environment external to the vehicle remain static within the displayed visual representation of the vehicle environment when updating the displayed visual representation during a movement of the vehicle. In other words, a bird's-eye view of the vehicle environment and the maneuver is provided. Put in yet another way, a movement of the vehicle is shown within the displayed visual representation of the vehicle environment by changing the position of a visual representation of the vehicle within the environment rather than by changing the position of a visual representation of an object external to the vehicle.
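
    The static-environment behavior described above can be sketched as follows. This is an illustrative sketch only, not part of the disclosure; the data structures and names are assumptions made for the example.

```python
# Sketch: the environment objects keep fixed display coordinates between
# frames; only the vehicle marker is redrawn at its updated pose.

def render_frame(static_objects_px, vehicle_pose):
    """Compose one display frame from a fixed environment and a moving vehicle.

    static_objects_px: object name -> (x_px, y_px), computed once from the map.
    vehicle_pose: (x_px, y_px, heading_deg) of the vehicle for this frame.
    """
    return {"objects": static_objects_px,  # unchanged while the vehicle moves
            "vehicle": vehicle_pose}       # the only element that moves

env = {"wall": (100, 50), "other_car": (300, 80)}
frame_1 = render_frame(env, (200, 400, 0.0))
frame_2 = render_frame(env, (200, 380, 2.0))  # vehicle moved; objects did not
```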

    [0043] Turning to step 106 of FIG. 1A, the visual representation is overlaid with one or more first visual elements representing a proposed trajectory of a maneuver of the vehicle within the vehicle environment. The proposed trajectory may be calculated based on the detected and/or calculated map of the vehicle environment. In an embodiment, the proposed trajectory may be an optimum trajectory of the maneuver. According to a preferred embodiment, the proposed trajectory comprises a current position of the vehicle, representing for example a starting position of the maneuver, and a final position of the maneuver. In other words, the complete trajectory of the maneuver is shown rather than parts of the trajectory.
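
    One way to realize such an overlay is to express the proposed trajectory as world-frame waypoints from the current position to the final position and project them into the pixel frame of the top view. The following is a minimal sketch under assumed scale and origin values, not an implementation from the disclosure.

```python
# Sketch: project world-frame trajectory waypoints (metres, vehicle-centred)
# into the pixel frame of a top-view display. Scale/origin are hypothetical.

def world_to_display(waypoints, origin=(400, 300), px_per_m=20.0):
    """Map (x, y) waypoints in metres to (x, y) display pixels.

    origin is the pixel of the vehicle's current position; world y grows
    forward but screen y grows downward, hence the sign flip.
    """
    ox, oy = origin
    return [(int(ox + x * px_per_m), int(oy - y * px_per_m))
            for x, y in waypoints]

# Complete trajectory of a short reverse maneuver: current to final position.
trajectory_m = [(0.0, 0.0), (0.5, -1.0), (1.5, -2.5), (2.0, -4.0)]
overlay_px = world_to_display(trajectory_m)
```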

    [0044] According to a preferred embodiment, in step 108, the visual representation is overlaid, simultaneously with the first visual elements, with one or more second visual elements to assist the driver in performing the maneuver. For example, in a preferred embodiment, the first visual element may represent the proposed trajectory and the one or more second visual elements may indicate the current position and/or the currently anticipated trajectory of the vehicle. The second visual elements may include indications to the driver of proposed adjustments of the vehicle position and/or trajectory, for example if and/or how the driver should adjust the vehicle movement. According to an embodiment, the one or more second visual elements are indicative of a current trajectory of the vehicle relative to the proposed trajectory.

    [0045] FIG. 3 depicts an example of first and second visual elements indicating that the driver shall adjust the direction of the vehicle. In FIG. 3, the first visual element is a dotted line, which represents the proposed trajectory. The shaded (or colored) cues are the second visual elements, which represent the deviation of the current vehicle position from the proposed trajectory. The current trajectory may be determined based on the current vehicle position and a predicted vehicle trajectory based on the preceding movement and/or the current position of the wheels of the vehicle. The current trajectory may be embedded into the detected map of the vehicle environment and compared therein to the proposed trajectory. The second visual elements may indicate if there is a deviation between these trajectories and may additionally or alternatively indicate how the driver should adjust the current vehicle trajectory. For example, the shade (or color) of the cues indicates the magnitude of the suggested changes of the movement of the vehicle. For example, the position of the second visual element may indicate that the driver should adjust the vehicle movement to the left or to the right.
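
    The direction cue just described can be sketched as a comparison of headings. The function below is a hypothetical illustration: the names, dead band, and sign convention (positive deviation meaning a correction to the left) are assumptions, not taken from the disclosure.

```python
# Sketch: derive a steering cue (direction plus magnitude) from the gap
# between the predicted heading and the heading of the proposed trajectory.

def steering_cue(current_heading_deg, proposed_heading_deg, dead_band_deg=2.0):
    """Return (direction, magnitude_deg) for the displayed cue.

    direction is 'left', 'right', or 'none'; the magnitude could drive the
    shade (or color) of the cue, as in FIG. 3.
    """
    deviation = proposed_heading_deg - current_heading_deg
    if abs(deviation) <= dead_band_deg:
        return ("none", 0.0)  # on track: no corrective cue needed
    return ("left" if deviation > 0 else "right", abs(deviation))
```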

    [0046] FIG. 4 depicts an example of first and second visual elements indicating that the driver shall adjust the velocity and gear of the vehicle. In FIG. 4, the first visual element is a dotted line representing the proposed trajectory. The second visual element is an arrow. The length of the arrow indicates the gap between the current speed and a proposed speed, in other words, the magnitude of the suggested change of the motion of the vehicle. The direction of the arrow indicates the direction of the proposed vehicle motion, for example forward or backward.
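
    Such an arrow could be parameterized as in the following sketch, in which the pixel scaling and clamping are invented display parameters rather than values from the disclosure.

```python
# Sketch: arrow direction encodes the proposed direction of motion, arrow
# length encodes the gap between current and proposed speed.

def speed_arrow(current_speed_mps, proposed_speed_mps,
                px_per_mps=30.0, max_len_px=120.0):
    """Return (direction, length_px) for the displayed arrow.

    Negative proposed speed denotes backward motion; the length is clamped
    so the arrow stays inside the display.
    """
    direction = "forward" if proposed_speed_mps >= 0.0 else "backward"
    gap = abs(proposed_speed_mps - current_speed_mps)
    return (direction, min(gap * px_per_mps, max_len_px))
```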

    [0047] According to an embodiment, at least one of the first and second visual elements is indicative of an accumulated driver performance. Thereby, the visual guidance may be tailored to the driver of the vehicle by creating an individual profile for each driver. Based on such a profile, the second visual element may indicate proposed changes to the current vehicle movement during a current maneuver based on the previous performance of the driver. For example, if the driver needed many adjustments during previous maneuvers, the second visual elements may comprise more detailed indications for vehicle movement adjustments than in a scenario where the driver needed fewer adjustments during previous maneuvers.

    [0048] Therefore, according to an embodiment, in step 110, a travelled trajectory of the vehicle for one or more maneuvers performed by the vehicle is determined. In step 112, the travelled trajectory is compared to the proposed trajectory of the respective maneuver. Based on this comparison, one or more scores are determined in step 114 and stored in step 116. For example, if the deviation between the travelled and proposed trajectory is small, the maneuver may be labeled with a good score and if the deviation between the travelled and proposed trajectory is large, the maneuver may be labeled with a poor score. In step 118, an accumulated driver performance based on the one or more scores of the one or more maneuvers is determined. For example, the accumulated driver performance may be an average of the scores. The accumulated driver performance may be used during future vehicle maneuvers performed by the driver, i.e. more or less guidance may be provided if the accumulated driver performance is poor or good, respectively. The accumulated driver performance may be displayed.
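

    Steps 110-118 can be sketched as follows, assuming trajectories sampled at matching points and a simple deviation-based score; the scoring formula is an illustrative choice, not taken from the disclosure.

```python
import math

# Sketch: score a maneuver from the mean point-wise deviation between the
# travelled and the proposed trajectory, then average the stored scores.

def maneuver_score(travelled, proposed):
    """Score in (0, 1]; 1.0 means the travelled path matched the proposal."""
    deviations = [math.dist(p, q) for p, q in zip(travelled, proposed)]
    mean_dev = sum(deviations) / len(deviations)
    return 1.0 / (1.0 + mean_dev)  # small deviation -> score near 1

def accumulated_performance(scores):
    """Step 118: here, simply the average of the one or more stored scores."""
    return sum(scores) / len(scores)
```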

    [0049] FIG. 1B shows an embodiment of the method, wherein, prior to overlaying the visual representation of the vehicle environment with a proposed trajectory of the maneuver, a driver interface for selecting at least one of a plurality of predetermined maneuvers within the displayed environment is displayed in step 120. The plurality of predetermined maneuvers may be determined based on the sensor data and/or a map representing the vehicle environment, the map being created based on the sensor data. The driver is presented with the possible maneuvers. The display may comprise an indication of one preferred maneuver, for example one optimum maneuver. For example, the plurality of predetermined maneuvers may comprise several parking maneuvers for several parking slots in the vehicle environment. The display may comprise a prompt for the driver to select one maneuver. Thereby, the driver may choose a maneuver s/he prefers, for example the quickest or easiest maneuver. Based on the driver’s selection, the chosen maneuver of the plurality of predetermined maneuvers is determined in step 122 and the proposed trajectory of this maneuver is determined in step 124.
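
    Steps 120-124 might look like the following sketch, in which the cost values used to rank maneuvers and mark one as preferred are hypothetical.

```python
# Sketch: build the selection menu of predetermined maneuvers, then
# resolve and validate the driver's choice.

def build_maneuver_menu(candidates):
    """candidates: maneuver name -> estimated cost; lowest cost is preferred."""
    preferred = min(candidates, key=candidates.get)
    return [{"maneuver": name, "preferred": name == preferred}
            for name in sorted(candidates)]

def select_maneuver(menu, driver_choice):
    """Return the chosen maneuver after validating the driver input."""
    if driver_choice not in (entry["maneuver"] for entry in menu):
        raise ValueError(f"unknown maneuver: {driver_choice}")
    return driver_choice

menu = build_maneuver_menu({"parallel_left": 3.0, "reverse_bay": 1.5})
chosen = select_maneuver(menu, "reverse_bay")
```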

    [0050] FIG. 1C shows an alternative embodiment of the method, wherein, prior to overlaying the visual representation of the vehicle environment with a proposed trajectory of the maneuver, the maneuver is determined in step 126 based on the detected vehicle environment and/or a user input. In a preferred embodiment, the maneuver is a parking maneuver. For example, a parking maneuver into a certain parking slot, based on the vehicle environment and/or the driver’s selection, is determined. In step 128, the proposed trajectory of the maneuver is determined in response to the determination of the maneuver and based on the detected vehicle environment.

    [0051] FIGS. 5A-5B depict flow charts of methods 500 for generating a warning or deactivating and/or updating the display of visual guidance. According to a preferred embodiment in FIG. 5A, the current vehicle speed during the maneuver is determined in step 502. If the speed of the vehicle is above a first predetermined threshold speed (step 504), one or more warnings are generated and displayed in step 506. If the speed of the vehicle is above a second predetermined threshold speed for at least a predetermined duration (step 508), the display of the proposed trajectory is deactivated in step 510. Additionally or alternatively, if the speed of the vehicle is above a third predetermined threshold speed larger than the first threshold speed (step 512), the display of the proposed trajectory is deactivated in step 514.
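
    The FIG. 5A logic reduces to a threshold cascade. The sketch below uses invented threshold values and assumes the caller tracks how long the second threshold has been exceeded.

```python
# Sketch of steps 504-514: warn above t1, deactivate when t2 is exceeded
# for a sustained duration, deactivate immediately above t3 (t3 > t1).

def speed_guidance_action(speed, over_t2_duration_s,
                          t1=7.0, t2=10.0, t3=15.0, min_duration_s=2.0):
    """Return 'none', 'warn', or 'deactivate' for the current speed sample."""
    if speed > t3:
        return "deactivate"  # step 514: immediately above the third threshold
    if speed > t2 and over_t2_duration_s >= min_duration_s:
        return "deactivate"  # step 510: sustained above the second threshold
    if speed > t1:
        return "warn"        # step 506: generate one or more warnings
    return "none"
```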

    [0052] According to an embodiment in FIG. 5B, the current positional deviation between the vehicle position and the proposed position on the proposed trajectory is determined in step 516. If the positional deviation of the vehicle from the proposed trajectory is above a first predetermined threshold deviation (step 518), one or more warnings are generated and displayed in step 520. If the positional deviation of the vehicle is above a second predetermined threshold deviation for at least a predetermined duration (step 522), the display of the proposed trajectory is deactivated in step 524. Additionally or alternatively, if the positional deviation of the vehicle is above a third predetermined threshold deviation larger than the first threshold deviation (step 526), the proposed trajectory is updated in step 528. For example, a new trajectory based on the new starting position is determined and displayed.
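
    FIG. 5B follows the same cascade, except that exceeding the largest threshold triggers re-planning rather than deactivation. A sketch with hypothetical threshold values:

```python
# Sketch of steps 518-528: warn above d1, deactivate when d2 is exceeded
# for a sustained duration, re-plan the trajectory above d3 (d3 > d1).

def deviation_guidance_action(deviation_m, over_d2_duration_s,
                              d1=0.3, d2=0.5, d3=1.0, min_duration_s=2.0):
    """Return 'none', 'warn', 'deactivate', or 'replan' for this sample."""
    if deviation_m > d3:
        return "replan"      # step 528: new trajectory from the new position
    if deviation_m > d2 and over_d2_duration_s >= min_duration_s:
        return "deactivate"  # step 524: sustained above the second threshold
    if deviation_m > d1:
        return "warn"        # step 520: generate one or more warnings
    return "none"
```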

    [0053] FIG. 6 depicts a flow chart of a method 600 for providing visual guidance to a driver of a vehicle according to another embodiment. In step 602, sensor data representing a vehicle environment are detected by a plurality of sensors of the vehicle. Possible sensors comprise ultrasonic sensors, cameras with computer vision (classical or neural-network based), LIDAR, or laser scanners. In step 604, a visual representation of the vehicle environment is dynamically displayed by a display of the vehicle. For example, a map may be created using data captured by all or a subset of the available sensors in the vehicle. An image or live view of the environment is displayed on a display of the vehicle. In step 608, one or more objects in the vicinity of the vehicle are detected by the plurality of sensors of the vehicle. Based on the map of the vehicle environment, distances between the external objects and the vehicle may be calculated. In step 610, one or more visual elements are displayed if the distance between the vehicle and the detected one or more objects is below a predetermined threshold distance. In a preferred embodiment, the one or more visual elements comprise camera views of the one or more objects and/or indicate the distance to the one or more objects. For example, the visual elements may include zoom-ins of the region of interest and marked distances between the vehicle and the object. FIG. 7 depicts an example of the visual guidance including third visual elements in the form of zoom-ins and marked distances.
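
    Steps 608-610 can be sketched as a proximity filter; the object representation, element structure, and threshold value below are assumptions made for illustration.

```python
import math

# Sketch: emit one visual element (zoomed view plus marked distance) per
# detected object that is closer than the predetermined threshold distance.

def proximity_elements(vehicle_xy, objects, threshold_m=1.5):
    """objects: name -> (x, y) in metres; returns elements for nearby objects."""
    elements = []
    for name, position in objects.items():
        distance = math.dist(vehicle_xy, position)
        if distance < threshold_m:
            elements.append({"object": name,
                             "distance_m": round(distance, 2),
                             "views": ["zoom-in"]})
    return elements

nearby = proximity_elements((0.0, 0.0),
                            {"pillar": (1.0, 0.5), "wall": (3.0, 0.0)})
```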

    [0054] FIG. 8 depicts a block diagram of a system 800 to provide visual guidance to a driver of a vehicle. The system comprises one or more sensors 802; one or more displays 804; and a computing device 806. The system is configured to execute the above embodiments of the method. All properties of the method of the present disclosure also apply to the system. In a preferred embodiment, the sensors 802 comprise a surround view camera system. Possible further sensors 802 comprise ultrasonic sensors, cameras with computer vision (classical or neural-network based), LIDAR, or laser scanners. In a preferred embodiment, the one or more displays 804 are displays of the vehicle.