DRIVER ASSISTANCE SYSTEM FOR DETERMINING A POSITION OF A VEHICLE

20190226851 · 2019-07-25

    Abstract

    The invention relates to a driver assistance system for determining a position of a vehicle. The driver assistance system comprises a processing unit and a first positioning system for providing first information about the position of the vehicle. The driver assistance system further comprises a second positioning system for providing second, visual information about the position of the vehicle. The second positioning system is configured to provide the second, visual information about the position of the vehicle based on a comparison between image data of an image generated by an on-board camera of the vehicle and image data of an image stored in a database using a visual bag of words technique. The processing unit is configured for determining the position of the vehicle based on the first information about the position of the vehicle and on the second, visual information about the position of the vehicle.

    Claims

    1. A driver assistance system for determining a position of a vehicle, comprising: a processing unit; a first positioning system for providing first information about the position of the vehicle; a second positioning system for providing second information about the position of the vehicle; wherein the second positioning system is configured to provide the second information about the position of the vehicle based on a comparison between image data of an image generated by an on-board camera of the vehicle and image data of an image stored in a database using a visual bag of words technique; wherein the processing unit is configured for determining the position of the vehicle based on the first information about the position of the vehicle and on the second information about the position of the vehicle.

    2. The driver assistance system according to claim 1, wherein the first information about the position of the vehicle is provided by a satellite navigation system and/or based on a position estimation of the vehicle using dead-reckoning.

    3. The driver assistance system according to claim 1, wherein the comparison between the image data of the image generated by the on-board camera of the vehicle and the image data of the image stored in the database comprises a comparison between features of the image generated by the on-board camera of the vehicle and corresponding features of the image stored in the database.

    4. The driver assistance system according to claim 1, wherein the second positioning system is configured for selecting features from the image stored in the database in order to generate a first group of image features; and wherein the second positioning system is configured for selecting features from the image generated by the on-board camera of the vehicle in order to generate a second group of image features.

    5. The driver assistance system according to claim 4, wherein the second positioning system is configured for allocating first feature descriptors to the first group of image features using similarity criteria, the first feature descriptors being representative of the first group of image features of the image stored in the database; and wherein the second positioning system is configured for allocating second feature descriptors to the second group of image features using similarity criteria, the second feature descriptors being representative of the second group of image features of the image generated by the on-board camera of the vehicle.

    6. The driver assistance system according to claim 5, wherein a generalized feature descriptor is obtained during a learning process which is based on a plurality of images.

    7. The driver assistance system according to claim 6, wherein the second positioning system is configured for allocating the first feature descriptors to the generalized feature descriptor; and/or wherein the second positioning system is configured for allocating the second feature descriptors to the generalized feature descriptor.

    8. The driver assistance system according to claim 7, wherein the second positioning system is configured for determining a similarity between the image stored in the database and the image generated by the on-board camera of the vehicle based on a determined number of first feature descriptors allocated to the generalized feature descriptor and based on a determined number of second feature descriptors allocated to the generalized feature descriptor.

    9. The driver assistance system according to claim 7, wherein the second positioning system is configured for determining a similarity between the image stored in the database and the image generated by the on-board camera of the vehicle using a Euclidean distance or a cosine similarity of histograms.

    10. The driver assistance system according to claim 8, wherein the determined similarity between the image stored in the database and the image generated by the on-board camera of the vehicle represents the second, visual information about the position of the vehicle which is combined with the first information about the position of the vehicle by the processing unit in order to determine the position of the vehicle.

    11. A method for determining a position of a vehicle, comprising the steps: Providing first information about the position of the vehicle by a first positioning system; Providing second information about the position of the vehicle by a second positioning system, wherein the second information about the position of the vehicle is provided based on a comparison between image data of an image generated by an on-board camera of the vehicle and image data of an image stored in a database using a binary representation of image features and a visual bag of words technique; Determining the position of the vehicle by a processing unit based on the first information about the position of the vehicle and on the second information about the position of the vehicle.

    12. The driver assistance system according to claim 9, wherein the determined similarity between the image stored in the database and the image generated by the on-board camera of the vehicle represents the second information about the position of the vehicle which is combined with the first information about the position of the vehicle by the processing unit in order to determine the position of the vehicle.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0054] FIG. 1 shows a flow diagram for a method for determining a position of a vehicle according to an embodiment of the invention.

    [0055] FIG. 2 shows a driver assistance system for determining a position of a vehicle according to an embodiment of the invention.

    [0056] FIG. 3 shows an image generated by an on-board camera of a vehicle and an image stored in a database according to an embodiment of the invention.

    DETAILED DESCRIPTION

    [0057] The method for determining the position of the vehicle 30 comprises three main steps. In a first step S11, first information 50 about the position of the vehicle 30 is provided by a first positioning system 11. In a second step S12 of the method, second, visual information 35 about the position of the vehicle 30 is provided by a second positioning system 12, wherein the second, visual information 35 about the position of the vehicle 30 is based on a comparison S5 between image data 36 of an image generated by an on-board camera of the vehicle 30 and image data 41 of an image stored in a database 40. In a third step S33 of the method, the position of the vehicle 30 is determined by a processing unit 33, which may also be the localization module of the vehicle 30, based on the first information 50 about the position of the vehicle 30 and on the second, visual information 35 about the position of the vehicle 30.
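    As a non-authoritative illustration, the three steps S11, S12 and S33 described above can be sketched as follows; the function names and signatures are hypothetical and not part of the disclosure:

```python
# Hypothetical sketch of the three-step method (S11, S12, S33).
# The positioning systems and the fusion step are passed in as
# callables; a real system would implement each of them concretely.

def determine_position(first_positioning, second_positioning, fuse):
    # Step S11: first information about the vehicle position,
    # e.g. from a satellite navigation system or dead-reckoning.
    first_info = first_positioning()

    # Step S12: second, visual information from comparing on-board
    # camera images with database images (visual bag of words).
    visual_info = second_positioning()

    # Step S33: the processing unit fuses both pieces of information.
    return fuse(first_info, visual_info)
```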

    [0058] Generally, the vehicle 30 provides its position and uncertainty area to the server unit 20, and the server unit 20 uses this information to download candidate images from the database 40, which is, for instance, a street view server. The candidate images from the database 40 are hashed and then sent to the vehicle 30. The vehicle 30 generates hashes from the images generated by its on-board camera or on-board cameras and compares them with the hashed images provided by the server unit 20. The resulting visual similarity scores and the associated candidate positions are fed to the vehicle localization module, e.g. the processing unit 33.

    [0059] In detail, the vehicle 30 transmits in a step S1 its approximate position to the server unit 20. Approximate position means that uncertainties may be incorporated in the position information provided by the vehicle 30. This information may be the first information 50 about the position of the vehicle 30 provided by the first positioning system 11, e.g. a satellite navigation system. The first positioning system 11 may also be configured for providing an estimation of the position of the vehicle 30. In particular, the first information 50 provided by the first positioning system 11 may be derived from a position estimation of the vehicle 30, for example using dead-reckoning.

    [0060] In a step S1, a position-based query or request is directed to the database 40. This request refers to the download of images from the database 40, wherein the images to be downloaded represent stored images corresponding to the current location of the vehicle 30, which was provided by the first positioning system 11. The server unit 20 downloads from the database 40 the images contained within the uncertainty area of the vehicle 30 along with their geographical positions. For example, the images of the database 40, e.g. the downloaded image data 41 of the database 40, are based on a street view server such as Google Street View. The download of the images from the database 40 is conducted in a step S2.
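    The candidate selection within the uncertainty area (steps S1/S2) can be sketched as a simple geometric filter; positions are assumed here to be (x, y) coordinates in metres in a local frame, and the database is modeled as a list of (image_id, position) pairs, both of which are illustrative assumptions:

```python
import math

# Illustrative candidate retrieval: keep only those database images
# whose stored geographical position lies within the vehicle's
# uncertainty area, modeled as a circle of radius radius_m around
# the approximate position reported by the first positioning system.

def candidates_in_uncertainty_area(approx_pos, radius_m, database):
    """Return the (image_id, position) pairs within radius_m of approx_pos."""
    selected = []
    for image_id, pos in database:
        if math.dist(approx_pos, pos) <= radius_m:
            selected.append((image_id, pos))
    return selected
```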

    [0061] In a step S3, the server unit 20 generates the corresponding image hashes and transmits them to the vehicle 30 along with the global positions associated with the images.

    [0062] In a step S4, the vehicle 30 generates hashes from the image data 36 of the images generated by the on-board camera of the vehicle 30.

    [0063] In a step S5, the hashes generated by the vehicle 30 and the hashes generated by the server unit 20 are compared. The result is a visual similarity score between the current on-board camera view and the image candidates within the uncertainty area provided by the server unit 20. In particular, the comparison between the image data 36 of the image generated by the on-board camera of the vehicle 30 and the image data 41 of the image stored in the database 40 results in the second, visual information 35.
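    Step S5 can be sketched as scoring every candidate hash against the vehicle's hash and pairing each score with the candidate's geographical position; the similarity measure itself is passed in as a callable, since the concrete choice (e.g. a cosine similarity of histograms) is made elsewhere. The data layout is an assumption for illustration:

```python
# Illustrative hash comparison (step S5): compare the vehicle's image
# hash against every candidate hash received from the server unit and
# return (score, position) pairs, best match first.

def score_candidates(vehicle_hash, candidates, similarity):
    """candidates: (candidate_hash, position) pairs from the server unit."""
    scored = [(similarity(vehicle_hash, h), pos) for h, pos in candidates]
    scored.sort(key=lambda sp: sp[0], reverse=True)
    return scored
```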

    [0064] This second, visual information 35, e.g. the visual similarity scores, is sent to the vehicle localization module 33 along with the geographical position information of the image candidates provided by the server unit 20. The vehicle localization module 33 uses this information, possibly fusing it with other localization data, to provide highly improved vehicle position estimations. In particular, in step S12, the visual information 35 about the position of the vehicle 30 is provided by the second positioning system 12, and other position information, for example the first information 50 about the position of the vehicle 30, may be provided by the first positioning system 11, such that the first information 50 and the second, visual information 35 are fused or combined in order to determine the position of the vehicle in a step S33. This means that the first information 50 of the first positioning system 11 may be provided before and/or after the comparison of the images of the database 40 with the images of the on-board camera of the vehicle 30, i.e. the first information 50 of the first positioning system 11 may be provided in step S1 and/or in step S33.
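    One minimal way to sketch the fusion in step S33 is to blend the first position estimate with the position of the most similar candidate, weighted by its similarity score. This is only an illustrative assumption; an actual localization module might instead use a Kalman filter or another probabilistic fusion scheme, which the disclosure does not specify:

```python
# Hypothetical fusion sketch for step S33: the candidate with the
# highest visual similarity score pulls the estimate toward its
# geographical position, proportionally to the score (clamped to [0, 1]).

def fuse_position(first_pos, scored_candidates):
    """first_pos: (x, y); scored_candidates: list of (score, (x, y))."""
    if not scored_candidates:
        return first_pos  # no visual information available
    score, cand_pos = max(scored_candidates, key=lambda sc: sc[0])
    w = max(0.0, min(1.0, score))
    x = (1 - w) * first_pos[0] + w * cand_pos[0]
    y = (1 - w) * first_pos[1] + w * cand_pos[1]
    return (x, y)
```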

    [0065] FIG. 2 shows a driver assistance system 10 comprising the vehicle 30, the server unit 20 and the database 40. A wireless data transfer may be provided between the vehicle 30 and the server unit 20 as well as between the server unit 20 and the database 40. However, it is also possible that the server unit 20, which generates the image hashes based on the images of the database 40, is located within the vehicle 30. The vehicle 30 comprises the processing unit 33, which is identified as the vehicle localization module in the sense of the present invention. The vehicle localization module 33 may, however, also be located separately from the vehicle 30. The vehicle 30 further comprises a communication unit 32 which communicates in a wireless manner with the server unit 20, receiving the image hashes from the server unit 20 in order to compare them with image hashes generated from the images of the on-board camera 31 of the vehicle 30. This comparison may be conducted in the vehicle localization module 33, e.g. the processing unit 33.

    [0066] The on-board camera 31 of the vehicle 30 may be a stereo camera comprising a first camera 31a and a second camera 31b in order to generate images which, after hashing, are sent to the vehicle localization module 33.

    [0067] FIG. 3 shows an image 60 generated by the on-board camera 31 of the vehicle 30 and an image 70 stored in the database 40. The image 60 generated by the on-board camera 31 represents a scenario within an environment around the vehicle 30. Within the image 60, some of the image features 61 marked with a cross are selected in order to form a second group 62 of image features 61, for example within a certain region of the image 60 as shown in FIG. 3. Similarly, some of the image features 71, also marked with a cross, in the image 70 stored in the database 40 are selected to generate a first group 72 of image features 71. First feature descriptors are allocated to the first group 72 of image features 71, the first feature descriptors being representative of the first group 72. The frequency of occurrence of the first feature descriptors for the image 70 stored in the database 40 as well as the frequency of occurrence of the second feature descriptors for the image 60 generated by the on-board camera 31 of the vehicle 30 are determined by the second positioning system 12.

    [0068] Afterwards, a similarity between the image 70 stored in the database 40 and the image 60 generated by the on-board camera 31 of the vehicle 30 is determined based on a comparison between the frequency of occurrence of the first feature descriptors for the image 70 stored in the database 40 and the frequency of occurrence of the second feature descriptors for the image 60 generated by the on-board camera 31 of the vehicle 30.
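    The allocation of feature descriptors to generalized feature descriptors and the counting of their frequencies of occurrence, i.e. the core of the visual bag of words technique, can be sketched as follows. Descriptors and vocabulary words are assumed here to be plain numeric vectors of equal length; the vocabulary would come from the prior learning process over a plurality of images:

```python
# Sketch of building a bag-of-words histogram (the "image hash"):
# every feature descriptor is allocated to the nearest generalized
# feature descriptor (visual word) and the frequencies of occurrence
# per visual word are counted.

def bow_histogram(descriptors, vocabulary):
    """descriptors, vocabulary: lists of equal-length numeric vectors."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    hist = [0] * len(vocabulary)
    for d in descriptors:
        nearest = min(range(len(vocabulary)),
                      key=lambda i: sq_dist(d, vocabulary[i]))
        hist[nearest] += 1
    return hist
```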

    [0069] The frequency of occurrence of the feature descriptors, e.g. the number of allocations of the feature descriptors to the generalized feature descriptors forming the visual vocabulary, represents the respective image hashes of the image 60 and the image 70. The similarity score for the image 70 stored in the database 40 and the image 60 generated by the on-board camera 31 of the vehicle 30 is provided to establish the second, visual information 35. In other words, a similarity score is computed between the first and the second image hash, wherein the first image hash represents the frequency of occurrence of the first feature descriptors of the image 70 stored in the database 40 and the second image hash represents the frequency of occurrence of the second feature descriptors of the image 60 generated by the on-board camera 31 of the vehicle 30.
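    The similarity score between the two image hashes (histograms) can be computed, for instance, with the cosine similarity named in claim 9; a Euclidean distance could be used analogously. A minimal sketch:

```python
# Sketch of the similarity score between two image hashes, i.e. two
# bag-of-words histograms, using the cosine similarity of histograms.

def cosine_similarity(hist_a, hist_b):
    """Return the cosine of the angle between two histograms."""
    dot = sum(a * b for a, b in zip(hist_a, hist_b))
    norm_a = sum(a * a for a in hist_a) ** 0.5
    norm_b = sum(b * b for b in hist_b) ** 0.5
    if norm_a == 0 or norm_b == 0:
        return 0.0  # an empty histogram matches nothing
    return dot / (norm_a * norm_b)
```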

    [0070] While the invention has been illustrated and described in detail in the drawings and the foregoing description, such illustration and description are to be considered illustrative and exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the term "comprising" does not exclude other elements, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope of protection.