Device and method for providing information data with respect to an object of a vehicle environment contained in a video image stream
10227042 · 2019-03-12
Assignee
Inventors
Cpc classification
B60R1/27
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/80
PERFORMING OPERATIONS; TRANSPORTING
G06V20/56
PHYSICS
G01C21/3647
PHYSICS
B60R2300/302
PERFORMING OPERATIONS; TRANSPORTING
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
B60K35/28
PERFORMING OPERATIONS; TRANSPORTING
B60K35/10
PERFORMING OPERATIONS; TRANSPORTING
International classification
H04N7/18
ELECTRICITY
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A camera surround view system for a vehicle includes at least one vehicle camera, which provides camera images. The camera images are processed by a data processing unit in order to produce a live video image stream, which is displayed on a display unit. The camera surround view system loads local cloud information data regarding an object shown in the live video image stream from a data network and displays the obtained local cloud information data as an overlay in the displayed live video image stream.
Claims
1. A camera surround view system for a vehicle comprising at least one vehicle camera, which provides camera images, which are processed by a data processing unit in order to produce a live video image stream, which is displayed on a display unit, wherein the camera surround view system loads local cloud information data regarding an object shown in the live video image stream from a data network and displays the obtained local cloud information data as an overlay in the displayed live video image stream, wherein the data processing unit calculates a pointer from camera coordinates of the selected object, shown in the live video image stream, to GPS coordinates and sends a request via a wireless connection, in order to receive the cloud information data associated with the object from a database of the data network, and a data memory temporarily stores the local cloud information downloaded over a wireless interface.
2. The camera surround view system according to claim 1, wherein the local cloud information data are downloadable from the data network as a function of a current position, direction of travel, and/or speed of travel of the vehicle and the selected object.
3. The camera surround view system according to claim 2, wherein the selected object of interest is selected by a driver of the vehicle or a passenger located in the vehicle via a user interface.
4. The camera surround view system according to claim 3, wherein the object of interest is selected by the driver or passenger, by the driver or passenger touching the object displayed on the display unit.
5. The camera surround view system according to claim 4, further comprising a gateway providing a wireless connection to the data network.
6. The camera surround view system according to claim 5, wherein the current position, direction of travel, and/or speed of travel are provided by a GPS unit of the vehicle.
7. The camera surround view system according to claim 1, wherein the surround view system receives a list of GPS coordinates and associated local cloud information data from the database of the data network via the wireless connection and calculates a pointer from the GPS coordinates to the camera coordinates, at the position of which the local cloud information data are placed in the live video image stream.
8. A method for providing information data with respect to an object of a vehicle environment of a vehicle contained in a video image stream, comprising: generating camera images of the vehicle environment with at least one vehicle camera of the vehicle; processing the produced camera images to generate a live video image stream of the vehicle environment; displaying the generated live video image stream on a display unit of the vehicle, wherein cloud information data regarding a selected object in the live video image stream are displayed as an overlay in the live video image stream; calculating a pointer from camera coordinates of the selected object, shown in the live video image stream, to GPS coordinates; sending a request via a wireless connection in order to receive the cloud information data associated with the object from a database, wherein the local cloud information is downloaded over a wireless interface and is temporarily stored in a data memory.
9. The method according to claim 8, further comprising downloading local cloud information data from a data network as a function of a current position, direction of travel, and/or speed of travel of the vehicle and the selected object.
10. The method according to claim 9, wherein the selected object of interest is selected by a driver of the vehicle or a passenger located in the vehicle via a user interface.
11. The method according to claim 10, wherein the object of interest is selected by the driver or passenger, by the driver or passenger touching the object displayed on the display.
12. The method according to claim 8, wherein via the wireless connection a list of GPS coordinates and associated local cloud information data are transmitted from the database of the data network and a pointer is calculated from the GPS coordinates to the camera coordinates, at the position of which the local cloud information data are placed in the live video image stream.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) In the following, possible embodiments of the method and the device for providing information data with respect to an object of a vehicle environment of a vehicle contained in a video image stream are explained in more detail with reference to the attached figures.
(2) These show as follows:
(3)
(4)
(5)
(6)
DETAILED DESCRIPTION
(7) As can be seen in
(8) The data processing unit 4 is connected via data cables or a data bus to a gateway 6 of the vehicle. The data bus is, for example, an Ethernet data bus. In the embodiment shown, the gateway 6 is connected via a further vehicle bus, for example a CAN bus, to a GPS unit 7 of the vehicle, which supplies GPS coordinates of the current position of the vehicle 1. The GPS unit 7 can be part of the navigation system of the vehicle 1. The gateway 6 is connected via a wireless interface, for example a mobile radio interface, to a data network 8. The mobile radio interface can, for example, be a UMTS interface. The data network 8 has one or more databases 9, in which cloud information data are stored. The system shown in
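The request that travels from the gateway 6 over the wireless interface to the database 9 can be pictured as a small structured message carrying the vehicle state supplied by the GPS unit 7 and the coordinates of the selected object. The following is a minimal sketch only; the patent does not define a wire format, and all field names here are illustrative assumptions.

```python
def make_cloud_request(lat, lon, heading_deg, speed_kmh, object_gps):
    """Build an illustrative request body for database 9.

    The vehicle block carries the state from the GPS unit 7; the object
    block carries the GPS coordinates computed for the selected object.
    All key names are assumptions, not taken from the patent. In practice
    this dictionary would be serialized (e.g. as JSON) before being sent
    via the gateway 6 over the mobile radio interface.
    """
    return {
        "vehicle": {"lat": lat, "lon": lon,
                    "heading_deg": heading_deg, "speed_kmh": speed_kmh},
        "object": {"lat": object_gps[0], "lon": object_gps[1]},
    }
```

Keeping the vehicle state in the request allows the database to tailor the returned cloud information to the current position, direction of travel, and speed, as claims 2 and 9 require.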
(9) In a possible embodiment of the camera surround view system 2, the object of interest, for example a building on the side of the street, is selected by a driver of the vehicle 1 or a passenger located in the vehicle via a user interface. In a possible embodiment, the display unit 5 is designed as a touchscreen and forms the user interface. In this embodiment, the object of interest is selected by the driver or passenger touching the object displayed on the display unit 5.
(10) In a possible embodiment of the camera surround view system 2, the data processing unit 4 of the surround view system 2 calculates a pointer from camera coordinates of the selected object, which is shown in the live video image stream on the display unit 5, to GPS coordinates and then generates a request, which is sent via the wireless connection, in order to receive the cloud information data associated with the selected object from the database 9 of the data network 8. In a possible embodiment, the data processing unit 4 of the surround view system 2 receives, via the wireless connection, in response to the query, a list of GPS coordinates and associated local cloud information data from the database 9 of the data network 8 and then calculates a pointer from the GPS coordinates to the camera coordinates, at the position of which the local downloaded cloud information data are placed in the live video image stream by the data processing unit 4.
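The first pointer calculation described above maps the selected object from camera coordinates to GPS coordinates. One plausible way to sketch this, assuming the object's bearing relative to the camera and an estimated ground distance are already known from the image, is a flat-earth offset from the vehicle's GPS position; the projection model is an assumption, as the patent does not specify one.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, flat-earth approximation

def camera_to_gps(veh_lat, veh_lon, veh_heading_deg, bearing_deg, distance_m):
    """Project an object seen at a camera-relative bearing and estimated
    ground distance onto GPS coordinates.

    veh_heading_deg is the vehicle heading from the GPS unit 7;
    bearing_deg is the object's bearing relative to the vehicle axis,
    derived from its pixel position (derivation omitted here). This is a
    small-distance approximation, adequate for roadside objects.
    """
    abs_bearing = math.radians(veh_heading_deg + bearing_deg)
    d_north = distance_m * math.cos(abs_bearing)
    d_east = distance_m * math.sin(abs_bearing)
    lat = veh_lat + math.degrees(d_north / EARTH_RADIUS_M)
    lon = veh_lon + math.degrees(
        d_east / (EARTH_RADIUS_M * math.cos(math.radians(veh_lat)))
    )
    return lat, lon
```

The resulting coordinates would then key the request sent to the database 9, and the returned list of GPS coordinates is mapped back to camera coordinates for overlay placement.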
(11) A number of variants of the camera surround view system 2 are possible. In the embodiment shown in
(12) In possible applications the insertion of the cloud information data in the live video image stream preferably takes place in real time with the minimum possible time delay. In these applications the data processing is preferably performed by the data processing unit 4 within the camera surround view system 2. In other applications the local cloud information data can also be inserted with a slight time delay in the live video image stream, so that in these applications data preparation by a data processing unit outside of the vehicle 1 is also possible.
(13)
(14) In a first step S1, camera images are initially produced by at least one vehicle camera of the vehicle 1, representing a vehicle environment of the vehicle. In a possible embodiment the vehicle has a plurality of vehicle cameras, which together capture the entire vehicle environment, e.g. 360°, around the vehicle.
(15) In a further step S2 the produced camera images are processed to generate a live video image stream of the vehicle environment. This data processing can for example be performed by the data processing unit 4 shown in
(16) In a further step S3 the generated live video image stream is displayed on a display unit 5 of the vehicle 1, wherein cloud information data regarding a selected object are displayed as an overlay in the live video image stream. The selected object of interest is selected by a driver of the vehicle 1 or a passenger located in the vehicle 1 via a user interface. The object of interest may be selected by the driver or passenger touching the displayed object, for example a displayed building, on the display unit 5 of the camera surround view system 2. The cloud information data displayed in step S3 may be downloaded from a database of a data network, wherein local cloud information data pertaining to the selected object are downloaded from the database as a function of a current position, direction of travel, and/or speed of travel of the vehicle 1 and of the selected object. The current position, direction of travel, and/or speed of travel are made available to the respective data processing unit by a GPS unit or further sensor units of the vehicle 1. In a possible embodiment, a pointer from camera coordinates of the selected object, contained in the live video image stream, to GPS coordinates is calculated. A request is then sent via a wireless connection, in order to receive the cloud information data with respect to the selected object from the database of the data network. In a possible embodiment, a list of GPS coordinates and associated local cloud information data is received from the database of the data network, and a pointer from the GPS coordinates to the camera coordinates, at the position of which the local cloud information data are placed in the displayed live video image stream, is then calculated by the data processing unit.
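The overlay step S3 can be condensed into a small routine: for each user-selected object, keyed by its GPS coordinates, look up the associated cloud information and attach it at the object's on-screen pixel position. This is a minimal sketch; the data structures (dictionaries keyed by GPS tuples, pixel positions) are assumptions standing in for the live video pipeline and database 9.

```python
def build_overlays(selected_objects, cloud_db):
    """Assemble the overlays for one displayed frame.

    selected_objects: list of dicts with a "gps" key (lat/lon tuple, the
    pointer computed from camera coordinates) and a "pixel" key (the
    on-screen position where the overlay is placed).
    cloud_db: mapping from GPS tuples to cloud information strings; a
    local stand-in for the remote database 9.
    """
    overlays = []
    for obj in selected_objects:
        info = cloud_db.get(obj["gps"])
        if info is not None:  # objects without cloud data get no overlay
            overlays.append({"text": info, "pos": obj["pixel"]})
    return overlays
```

In the real system the lookup would go through the wireless connection (or the local data memory, once cached), but the placement logic is the same: each piece of cloud information is anchored at the camera coordinates derived from its GPS coordinates.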
(17)
(18)
(19) Further alternative embodiments are possible. For example, in a possible embodiment the driver is made aware of objects of interest on the side of the street S, for example by an acoustic signal. For example, at a certain point on the route, for example 100 m before passing building G3, the driver of the vehicle 1 can be made aware by an acoustic signal that in 100 m he will be passing an Italian restaurant. The driver can then select the announced object, for example by touching the screen of the display unit 5. In a further possible alternative embodiment, the driver can confirm the acoustic announcement with a command word, so that cloud information data on the object of interest are automatically downloaded, without the driver having to touch the display unit 5. In this alternative embodiment it is not necessary for the object of interest, for example the restaurant, to already be displayed on the display unit 5 at the time of selection. With the system, the object of interest, for example the restaurant shown in
(20) In a possible further alternative embodiment of the system, the driver can select or filter various types of objects. For example, the driver can select restaurants, gas stations or repair shops as an object type. In this case, in a possible embodiment the driver is only made aware of objects corresponding to the selected object type. If, for example, a driver is driving through an unfamiliar city looking for a suitable restaurant, he can search selectively for objects of the restaurant object type. As soon as the vehicle approaches an object of this object type, he can select it and read the associated cloud information data in order to make his choice.
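The object-type filter described above amounts to restricting the announced objects to those tagged with the driver-selected type. A minimal sketch, assuming each object carries a type tag (the tag vocabulary is illustrative):

```python
def filter_objects(objects, wanted_type):
    """Keep only objects matching the driver-selected type.

    objects: list of dicts, each with a "type" tag such as "restaurant",
    "gas_station", or "repair_shop" (tags are illustrative assumptions).
    Only objects of the wanted type would trigger an announcement or be
    offered for selection.
    """
    return [obj for obj in objects if obj["type"] == wanted_type]
```

In the described embodiment this filter would run over the objects near the route before any acoustic signal or overlay is produced, so the driver is not distracted by object types he has not asked for.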
(21) In a further possible alternative embodiment, the system has an editor, with which the driver can produce cloud information data on an object of interest, wherein the produced cloud information data are uploaded by the driver to a database 9 of the data network 8. For example, after dining at the selected restaurant a driver can give a rating for the restaurant, which can be stored as additional cloud information data in the database 9 of the data network 8.
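The editor function described above adds driver-produced cloud information to the entry for an object. A minimal sketch, with an in-memory dictionary standing in for database 9 and the record fields chosen as illustrative assumptions:

```python
def upload_rating(db, object_gps, rating, comment):
    """Append driver-produced cloud information for one object.

    db: mapping from GPS tuples to lists of records; a local stand-in
    for database 9 of the data network 8.
    object_gps: GPS coordinates identifying the rated object.
    rating/comment: the driver-produced content, e.g. a restaurant
    rating given after dining there. Field names are assumptions.
    """
    db.setdefault(object_gps, []).append(
        {"rating": rating, "comment": comment}
    )
    return db[object_gps]
```

Later selections of the same object would then receive this record alongside the other stored cloud information data.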
(22) The system described herein is highly versatile. For example, the vehicle 1 can have a plurality of display units 5, provided for the various passengers of the vehicle 1. For example, a tourist bus on a city tour can be equipped with a plurality of display units 5, displaying to the vehicle passengers, e.g. the tourists, a live video image stream of the vehicle environment. If, for example, during the city tour a tourist is interested in a particular building that they see from the window of the tour bus, they can simultaneously select the building also displayed on the display unit 5 by touching the screen and receive associated cloud information data on this building. In this way a plurality of tourists can simultaneously receive information on different objects of interest to them. In a further possible alternative embodiment, the display unit 5 is a head-up display. In this embodiment the driver or a passenger of the vehicle can select an object visible from a window of the vehicle by touching the visible object on the window of the vehicle. In this embodiment the head-up display or the window thus serves as the user interface for selection of the object. The data network 8 shown in
(23) In a further possible embodiment, the loaded local cloud information data with respect to the selected objects are stored in a local data memory of the system, so that when the object is next selected they are already available locally in the local data memory of the vehicle. In this way, access times to the cloud information data with respect to previously selected objects are minimized.
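The local data memory described above behaves as a cache in front of the wireless link: the first selection of an object triggers a download, and repeated selections are served locally. A minimal sketch, where the fetch callable stands in for the request to database 9:

```python
class CloudInfoCache:
    """Local data memory for downloaded cloud information data.

    fetch: a callable taking GPS coordinates and returning the cloud
    information for that object; a stand-in for the wireless request to
    database 9. Each object is fetched at most once; later selections
    are answered from the local store, minimizing access time.
    """

    def __init__(self, fetch):
        self._fetch = fetch
        self._store = {}

    def get(self, object_gps):
        if object_gps not in self._store:  # first selection: download
            self._store[object_gps] = self._fetch(object_gps)
        return self._store[object_gps]     # later selections: local hit
```

A production version would also bound the memory and expire stale entries, but the patent only requires that previously selected objects be available locally.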
(24) Based on the GPS data provided from the cloud, the position of the vehicle, and the viewing angle of the virtual camera, the data processing unit 4 preferably calculates the point at which the overlay data are displayed on the display unit 5. This calculation preferably takes place in real time.
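The placement calculation described above can be sketched in one dimension: from the object's bearing (derived from its GPS coordinates and the vehicle position) and the virtual camera's heading and field of view, compute the horizontal screen position of the overlay. This is a simplified pinhole model under stated assumptions; vertical placement and distance handling are omitted, and the patent does not prescribe a particular projection.

```python
import math

def gps_to_screen(obj_bearing_deg, cam_heading_deg, cam_fov_deg, screen_w):
    """Horizontal overlay position for an object, 1-D pinhole sketch.

    obj_bearing_deg: absolute bearing from the vehicle to the object,
    derived from its GPS coordinates and the vehicle position.
    cam_heading_deg / cam_fov_deg: viewing direction and horizontal
    field of view of the virtual camera.
    Returns the pixel column on a display of width screen_w, or None if
    the object lies outside the virtual camera's field of view.
    """
    # signed angle between camera axis and object, normalized to (-180, 180]
    rel = (obj_bearing_deg - cam_heading_deg + 180.0) % 360.0 - 180.0
    half_fov = cam_fov_deg / 2.0
    if abs(rel) > half_fov:
        return None
    # pinhole projection: tangent of the angle, scaled to [0, 1]
    x = (math.tan(math.radians(rel)) / math.tan(math.radians(half_fov)) + 1.0) / 2.0
    return int(round(x * (screen_w - 1)))
```

An object straight ahead of the virtual camera lands in the screen center; objects at the edge of the field of view land at the screen borders, which matches the intended anchoring of the overlay data at the projected object position.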