Virtual overlay system and method for occluded objects
11288785 · 2022-03-29
Assignee
Inventors
CPC classification
B60R1/00; G06F17/00; B60R2300/10; G08G1/09626; G08G1/096716; G02B2027/0141; G08G1/09623; G09G2340/12; G08G1/09675; G06V20/647; B60R2300/30; G08G1/096775; G09G2320/0261; B60R2300/70
(B codes: Performing operations; transporting. G codes: Physics.)
International classification
G08G1/0962; G08G1/0967; G06F17/00; G06T19/00; B60R1/00
(B codes: Performing operations; transporting. G codes: Physics.)
Abstract
The present invention provides a system and method of displaying a representation of a road sign. The method comprises receiving information associated with the road sign, the information associated with the road sign comprising a front face of the road sign and a location of the road sign in a road system; receiving an image from a camera, the road sign being within the field of view of the camera; determining, using the image, an amount of the road sign that is obstructed; generating, in dependence on the amount obstructed, the representation of the road sign using the information associated with the road sign; and outputting the representation of the road sign for display.
Claims
1. A computer-implemented method of displaying a representation of a road sign, the method comprising: receiving information associated with the road sign, the information associated with the road sign comprising a front face of the road sign and a location of the road sign in a road system; receiving an image from a camera of a vehicle, the road sign being within the field of view of the camera and the image being one of a sequence of images from the camera; determining, using the image, an amount of the road sign that is obstructed by comparing sequential images from the camera; generating, based on the amount of the road sign that is obstructed, the representation of the road sign using the information associated with the road sign; and outputting the representation of the road sign for display on a display of the vehicle.
2. The computer-implemented method of claim 1, wherein the information associated with the road sign is received from the camera and the representation is generated based on the image and the location from the camera.
3. The computer-implemented method of claim 1, wherein the information associated with the road sign is received from a remote server and stored in a database, and the representation is generated based on the information in the database.
4. The computer-implemented method of claim 1, wherein the display is a head-up display.
5. The computer-implemented method of claim 4, comprising outputting the representation of the road sign on the head-up-display to be overlaid over the position of the road sign when viewed by a user.
6. The computer-implemented method of claim 1, wherein the camera comprises a wide angle lens.
7. The computer-implemented method of claim 1, wherein the information associated with the road sign is determined by a sign detection module and the representation is generated based on the determined information, wherein the sign detection module is further arranged to determine the location and orientation of any detected road signs relative to the vehicle.
8. A non-transitory computer-readable storage medium comprising computer-readable instructions for a computer processor to carry out the computer-implemented method of claim 1.
9. A virtual overlay system for displaying a representation of a road sign, within a vehicle, the system comprising: an input arranged to receive: information associated with the road sign, the information associated with the road sign comprising a front face of the road sign and a location of the road sign in a road system; and an image from a camera of the vehicle, the road sign being within the field of view of the camera and the image being one of a sequence of images from the camera; a processor arranged to: determine, using the image, an amount of the road sign that is obstructed by comparing sequential images from the camera; and generate, based on the amount of the road sign that is obstructed, the representation of the road sign using the information associated with the road sign; and an output arranged to output the representation of the road sign for display on a display of the vehicle.
10. The system of claim 9, wherein the information associated with the road sign is received from the camera, and the representation is generated based on the image and the location from the camera.
11. The system of claim 9, wherein the information associated with the road sign is received from a remote server and stored in a database, and the representation is arranged to be generated based on the information in the database.
12. The system of claim 9, wherein the display is a head-up-display.
13. The system of claim 12, wherein the representation of the road sign is arranged to be output to the head-up-display to be overlaid over the position of the road sign when viewed by a user of the system.
14. The system of claim 9, wherein the camera comprises a wide angle lens.
15. The system of claim 9, wherein the information associated with the road sign is determined by a sign detection module and the representation is generated based on the determined information, wherein the sign detection module is further arranged to determine the location and orientation of any detected road signs relative to the vehicle.
16. A vehicle comprising the system of claim 9.
17. An electronic processor comprising: an electrical input for receiving signals comprising: information associated with a road sign, the information associated with the road sign comprising a front face of the road sign and a location of the road sign in a road system; and image information from a camera of a vehicle, the road sign being within the field of view of the camera and the image information being one of a sequence of images from the camera; and an electronic memory device electrically coupled to the electronic processor and having instructions stored therein; wherein the processor is configured to access the memory device and execute the instructions stored therein such that it is operable to: determine, using the image information, an amount of the road sign that is obstructed by comparing sequential images from the camera; generate, based on the amount obstructed, a representation of the road sign using the information associated with the road sign; and output the representation of the road sign for display on a display of the vehicle.
18. A vehicle comprising the electronic processor of claim 17.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(4) One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings.
DETAILED DESCRIPTION
(18) The camera 208 and the display 212 are each operatively connected to the overlay system 210. The display 212 is a head-up display. In other embodiments, the camera 208 and/or display 212 are wirelessly connected to the overlay system 210.
(19) The camera 208 has a field of view depicted by the lines 214, which is substantially the same as a field of view as the driver. The road sign 204 is ahead of the vehicle 206 and is in the field of view 214 of the camera 208.
(20) The overlay system 210 is arranged to continuously monitor the road system 200 using the camera 208 to detect road signs. If any road signs (e.g. road sign 204) are detected, then information associated with the road signs is stored in the sign database 256 (as shown in
(28) The camera input module 252 is arranged to receive input from the camera 208. The camera 208 is arranged to capture images of the road system 200 at between 24 and 240 frames per second. The camera 208 may be a stereoscopic camera.
(29) The sign detection module 254 is arranged to perform object recognition analysis on the images received by the camera 208 to determine the presence of road signs in the images. For example, the sign detection module 254 may be configured to detect road signs by: shape (e.g. circular, triangular or rectangular); predominant colour (e.g. yellow, green, blue or brown); and/or border colour (e.g. red or white).
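The shape- and colour-based detection described above can be sketched as follows. This is a minimal illustrative classifier, not the patent's actual implementation: the reference colours, the mean-RGB test and all names are assumptions for illustration.

```python
import numpy as np

# Hypothetical colour classifier for sign candidates, illustrating the kind of
# predominant-colour test the sign detection module 254 might apply.
SIGN_COLOURS = {
    "yellow": (230, 200, 40),
    "green": (20, 120, 60),
    "blue": (20, 60, 180),
    "brown": (120, 70, 30),
}

def predominant_colour(crop: np.ndarray) -> str:
    """Return the reference colour closest to the crop's mean RGB value."""
    mean_rgb = crop.reshape(-1, 3).mean(axis=0)
    return min(
        SIGN_COLOURS,
        key=lambda name: np.linalg.norm(mean_rgb - np.array(SIGN_COLOURS[name])),
    )

# A synthetic 8x8 crop that is mostly blue should classify as "blue".
crop = np.full((8, 8, 3), (25, 65, 175), dtype=np.uint8)
print(predominant_colour(crop))
```

In practice the module would combine such a colour test with shape detection (circular, triangular or rectangular contours) before accepting a candidate as a road sign.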
(30) The sign detection module 254 is further arranged to determine the location and orientation of any detected road signs relative to the vehicle 206.
(31) In other embodiments, the camera 208 is arranged to perform the functionality of the sign detection module 254. In such embodiments, the camera 208 is arranged to perform object recognition analysis on the captured images to determine the presence of road signs in the images. The camera 208 is further arranged to determine the location and/or orientation of any detected road signs relative to the vehicle 206. The camera 208 output comprises one or more of: the image data, the location data and the orientation data.
(32) Any road signs detected in the road system 200, for example road sign 204, are cropped from the camera image and stored in the memory 256. The cropped image comprises the front face of the road sign 204. The road signs are removed from the memory 256 after a predetermined length of time or when the storage space for the memory 256 is full. A representation of the front face of the road sign 204 may be created from the cropped image of the road sign 204. Then the memory may store a representation of the front face of the road sign 204, the location and/or orientation data in association with the road sign 204.
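The storage policy of paragraph (32), where cropped signs are removed after a predetermined time or when storage is full, can be sketched as below. The class name, capacity and time-to-live values are illustrative assumptions, not taken from the patent.

```python
import time
from collections import OrderedDict

# A minimal sketch of the sign store described above: cropped sign records are
# kept for a predetermined time and the oldest is evicted when the store is full.
class SignStore:
    def __init__(self, capacity=4, ttl_seconds=60.0):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self._store = OrderedDict()  # sign_id -> (timestamp, record)

    def put(self, sign_id, record, now=None):
        now = time.monotonic() if now is None else now
        self._evict(now)
        if len(self._store) >= self.capacity:
            self._store.popitem(last=False)  # drop the oldest entry
        self._store[sign_id] = (now, record)

    def get(self, sign_id, now=None):
        now = time.monotonic() if now is None else now
        self._evict(now)
        entry = self._store.get(sign_id)
        return entry[1] if entry else None

    def _evict(self, now):
        expired = [k for k, (t, _) in self._store.items() if now - t > self.ttl]
        for k in expired:
            del self._store[k]

store = SignStore(capacity=2, ttl_seconds=10.0)
store.put("sign_a", {"face": "30 mph"}, now=0.0)
store.put("sign_b", {"face": "stop"}, now=1.0)
store.put("sign_c", {"face": "yield"}, now=2.0)  # store full: evicts sign_a
print(store.get("sign_a", now=3.0))   # None
print(store.get("sign_b", now=20.0))  # None (expired after 10 s)
```

Explicit `now` arguments are used here only so the eviction behaviour is deterministic; a real module would rely on the monotonic clock.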
(33) The occlusion detection module 258 is arranged to compare sequential images from the camera 208 which have been identified as comprising a road sign in order to detect whether less of the road sign is visible in subsequent images.
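One way the sequential-image comparison of paragraph (33) could be realised is by comparing the visible sign area between frames. This is a sketch under assumptions: the boolean-mask representation and the `expected_scale` parameter (accounting for the sign growing as the vehicle approaches) are illustrative, not from the patent.

```python
import numpy as np

# A sketch of the comparison performed by the occlusion detection module 258:
# the visible area of a detected sign in the current frame is compared against
# the previous frame, scaled for the expected growth as the vehicle approaches.
def occlusion_fraction(prev_mask, curr_mask, expected_scale=1.0):
    """Estimate how much of the sign has become obstructed between frames.

    prev_mask, curr_mask: boolean arrays marking sign pixels in each frame.
    expected_scale: factor by which the sign's area should have grown
    (1.0 = no change expected).
    """
    expected_area = prev_mask.sum() * expected_scale
    visible_area = curr_mask.sum()
    if expected_area == 0:
        return 0.0
    return max(0.0, 1.0 - visible_area / expected_area)

prev_mask = np.ones((10, 10), dtype=bool)        # sign fully visible: 100 px
curr_mask = np.ones((10, 10), dtype=bool)
curr_mask[:, 5:] = False                          # right half now blocked: 50 px
print(occlusion_fraction(prev_mask, curr_mask))   # 0.5
```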
(34) The overlay output module 260 is arranged to determine and output representations of signs stored in the memory 256 to the display 212. The overlay output module 260 may be arranged to output the representation based on the determined location and orientation of the road sign being displayed.
(36) The sign detection module 254 checks at Step 306 whether any road signs were detected in the image. If one or more road signs are detected, the occlusion detection module 258 checks at Step 308 whether road signs were detected in the previous image received from the camera 208. If no road signs were previously detected, then the sign detection module 254 extracts (i.e. crops) any road signs from the current image and stores at Step 310 any extracted road signs in the memory 256. Following this, the process 300 returns to Step 302.
(37) If, following the check at Step 306, no road signs are detected in the image, then the occlusion detection module 258 checks at Step 312 whether road signs were detected in the previous image received from the camera 208. The occlusion detection module 258 takes into account the movement of the vehicle 206 (forwards and lateral) and the locations of any road signs relative to the vehicle 206 determined at Step 304, in order to determine whether the vehicle 206 has moved past the location of the sign. In other words, the occlusion detection module 258 checks whether the road sign is no longer expected to be in view of the camera 208.
(38) If following the check at Step 312, no road signs were previously detected, the process 300 returns to Step 302. However, if following the check at Step 312 road signs were previously detected and the vehicle 206 has not travelled past the road sign, this means that the road sign has now been obstructed from view of the camera 208. Accordingly, the overlay output module 260 retrieves at Step 314 the previously detected road sign from the memory 256. Then the overlay output module 260 determines the size, orientation and location of where the sign is expected to be. The overlay output module 260 also determines a representation of the road sign corresponding to the expected size, orientation and location of the road sign, and outputs at Step 316 the representation of the road sign for display on the display 212. Following this, the process 300 returns to Step 302. The effect of this is that, as the vehicle 206 approaches the location of the road sign 204, the representation of the road sign increases in size.
(39) Returning to the check at Step 308, if the outcome of the check is that one or more road signs were detected in the previous image received from the camera 208, then the occlusion detection module 258 determines at Step 318 whether the full front face of the road sign is visible, or if only part of the road sign is visible. The occlusion detection module 258 takes into account the movement of the vehicle 206 (forwards and lateral) and the locations of any road signs relative to the vehicle 206 determined at Step 304, in order to determine whether the vehicle 206 has begun to move past the location of the sign. In other words, the occlusion detection module 258 checks whether the full front face of the road sign is no longer expected to be in view of the camera 208.
(40) If following the check of Step 318, the full front face of the road sign is visible, then there is no occlusion or obstruction, and the process 300 returns to Step 302.
(41) However, if following the check of Step 318 only a part of the road sign is visible, this is indicative that the road sign is being partially blocked from the field of view of the camera 208. Accordingly, the overlay output module 260 retrieves at Step 320 the previously detected road sign from the memory 256. Then the overlay output module 260 determines the size, portion, orientation and location of where the sign is expected to be. The overlay output module 260 also determines a representation of the road sign corresponding to the expected size, portion, orientation and location of the road sign, and outputs at Step 322 the representation of the road sign for display on the display 212. Following this, the process 300 returns to Step 302.
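The per-frame decision flow of process 300 (Steps 306 to 322) can be summarised as below. The function signature, argument names and returned action labels are illustrative assumptions; the patent describes the flow in prose only.

```python
# A compact sketch of the per-frame decision flow of process 300.
def process_frame(detected_now, detected_prev, passed_sign, fully_visible):
    """Return the action the overlay system takes for the current frame."""
    if detected_now:
        if not detected_prev:
            return "store_sign"        # Step 310: crop and store new signs
        if fully_visible:
            return "no_overlay"        # Step 318 -> 302: nothing obstructed
        return "overlay_partial"       # Steps 320-322: partially blocked
    if detected_prev and not passed_sign:
        return "overlay_full"          # Steps 314-316: fully obstructed
    return "no_overlay"                # Step 312 -> 302: sign genuinely passed

print(process_frame(True, False, False, True))    # store_sign
print(process_frame(False, True, False, False))   # overlay_full
print(process_frame(True, True, False, False))    # overlay_partial
```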
(42) In an alternative embodiment, an overlay system comprises location determining means (e.g. a GPS receiver) and a database storing information associated with a set of road signs (the information associated with the road signs includes the front face 216 of the road sign and the location of the road sign in a road system 200). The overlay system is arranged to determine the location of the vehicle using the location determining means and, accordingly, the road signs that should be in the field of view of the camera 208. The overlay system is arranged to determine if the road signs that should be in the field of view of the camera 208 are blocked, and outputs a representation of the road sign in response.
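The alternative embodiment's lookup of signs expected in the camera's field of view can be sketched as a geometric test. This is a simplified illustration: planar coordinates in metres, an assumed maximum range and an assumed field-of-view half-angle, none of which are specified by the patent.

```python
import math

# Sketch: given the vehicle's position and heading, select signs from a stored
# database that should fall within the camera's field of view.
def signs_in_view(vehicle_xy, heading_rad, sign_db, max_range=150.0,
                  half_fov_rad=math.radians(35)):
    visible = []
    for sign_id, (sx, sy) in sign_db.items():
        dx, dy = sx - vehicle_xy[0], sy - vehicle_xy[1]
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > max_range:
            continue
        bearing = math.atan2(dy, dx)
        # Angle between the camera axis and the sign, wrapped to [-pi, pi].
        off_axis = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi
        if abs(off_axis) <= half_fov_rad:
            visible.append(sign_id)
    return visible

db = {"speed_30": (100.0, 5.0), "stop": (0.0, 200.0), "behind": (-50.0, 0.0)}
print(signs_in_view((0.0, 0.0), heading_rad=0.0, sign_db=db))   # ['speed_30']
```

Any sign returned by this test but not matched in the camera image would then be treated as blocked, triggering output of its stored representation.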
(44) The camera input module 404 is arranged to receive input from the camera 208. The vehicle 206 further comprises an internal camera 418 connected to the camera input module 404. The internal camera 418 is directed at the driver of the vehicle, and may be focussed on the driver's face.
(45) The sign database 408 comprises information associated with a set of road signs (the information associated with the road signs includes the front face 216 of the road sign and the location of the road sign in a road system 200). The set of road signs may be all the road signs in a geographic area, or the road signs along a route along which the vehicle is travelling. As shown in
(46) The GPS receiver 412 is arranged to determine the location of the vehicle 206 by receiving timing information from GPS satellites as is known in the art. In other embodiments, the location of the vehicle may be determined by other means such as cellular network multilateration as is known in the art.
(47) The sign detection module 406 is substantially the same as the sign detection module 254 and is arranged to perform object recognition analysis on the images received by the camera 208 to determine the presence of road signs in the images. The sign detection module 406 is further arranged to determine the location of any detected road signs relative to the vehicle 206 using information from the GPS receiver 412 to determine road signs that are expected to be in the field of view of the camera 208.
(48) The occlusion detection module 414 is substantially the same as the occlusion detection module 258 and is arranged to compare sequential images from the camera 208 which have been identified as comprising a road sign in order to detect whether less of the road sign is visible in subsequent images.
(49) The overlay output module 410 is substantially the same as the overlay output module 260 and is arranged to determine and output representations of signs stored in the sign database 408 to the display 212.
(50) The gaze detection module 416 is arranged to determine where the driver is looking, and what the driver is looking at. For example, taking the following 3D coordinates:
(51) Object Viewed by Driver (e.g. a road sign): O = [x_O, y_O, z_O]
(52) Driver's Head: D = [x_D, y_D, z_D]
(53) Forward Facing Camera: C = [x_C, y_C, z_C]
(54) And the following vectors:
(55) DO→ = O − D
(56) The eye gaze vector for the driver is determined via the internal camera 418 based on eye gaze direction and head position.
(57) DC→ = D − C
(58) Both the Driver's Head and the camera 208 are in the same coordinate system.
(59) CO→ = O − C
(60) This vector shows the path between the camera 208 and the object viewed by the driver. DO→ is output by the internal camera 418 and the camera 208 is able to position this in its 3D model.
(61) To determine the distance to the object, the internal camera 418 uses eye vergence to provide an estimate of distance. The camera 208 is stereoscopic, so it can also estimate target distance, and the stereoscopic estimate that most closely matches the eye vergence estimate is used. The object detected by the camera 208 that is intercepted by, or nearest to, the gaze vector is considered the object of visual interest.
(62) If there is a head movement of the driver, then the internal camera 418 recalculates the new co-ordinates of the eye position (since the camera 208 and the internal camera 418 are of fixed positions). As the vehicle moves, the relative position of the object also changes. The gaze detection module 416 tracks the new coordinates as the vehicle moves. The gaze detection module 416 can calculate the spatial co-ordinates of the object (e.g. road sign), which is intercepted by the gaze vector.
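The gaze geometry above can be sketched numerically: given the driver's head position D and candidate object positions in the shared coordinate system, the object whose direction from the head best matches the measured gaze vector DO→ is selected. The function name, candidate labels and coordinate values are illustrative assumptions.

```python
import numpy as np

# Sketch of the gaze-intersection step performed by the gaze detection module
# 416: pick the candidate whose bearing from the head is closest to the gaze.
def object_of_interest(head, gaze_dir, candidates):
    """Return the candidate id whose direction from the head best matches gaze_dir."""
    gaze_unit = gaze_dir / np.linalg.norm(gaze_dir)
    best_id, best_dot = None, -np.inf
    for obj_id, pos in candidates.items():
        to_obj = np.asarray(pos, dtype=float) - head
        to_obj /= np.linalg.norm(to_obj)
        dot = float(to_obj @ gaze_unit)  # cosine of the angle to the gaze vector
        if dot > best_dot:
            best_id, best_dot = obj_id, dot
    return best_id

head = np.array([0.0, 0.0, 1.2])          # driver's head D
gaze = np.array([1.0, 0.1, 0.0])          # DO→ estimated by the internal camera
objects = {"road_sign": (20.0, 2.0, 1.5), "billboard": (15.0, -8.0, 3.0)}
print(object_of_interest(head, gaze, objects))   # road_sign
```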
(64) If following the check of Step 454, there are expected to be road signs in the field of view of the camera 208, then the camera input module 404 receives at Step 456 an image (i.e. a video frame) from the camera 208. The occlusion detection module 414 determines at Step 458 whether the road signs that are expected to be in the field of view of the camera 208 are visible. The occlusion detection module 414 checks at Step 460 whether the full front face of the road sign is visible, or if none or only part of the road sign is visible. If the full front face of the road sign is visible, then the process 450 returns to Step 452.
(65) However, if following the check of Step 460, none or only part of the road sign is visible to the camera 208, then the overlay output module 410 retrieves at Step 462 the road sign from the sign database 408. Then the overlay output module 410 determines the size, portion, orientation and location of where the sign is expected to be. The overlay output module 410 also determines a representation of the road sign corresponding to the expected size, portion, orientation and location of the road sign, and outputs at Step 464 the representation of the road sign for display on the display 212. Following this, the process 450 returns to Step 452.
(66) It will be appreciated that the functionality of camera input module 252; 404, the sign detection module 254; 406, the overlay output module 260; 410, the occlusion detection module 258; 414 and the gaze detection module 416 may be performed by one or more processors. When performed by multiple processors the processors may be located independently from each other.
(67) It will be appreciated that road signs comprise static signs and active signs having changeable content, where the content may be changed over time. Active signs may include car parking information, road work and traffic information signs.
(68) Many modifications may be made to the above examples without departing from the scope of the present invention as defined in the accompanying claims.
(69) For example, if a road sign is fully obscured, then the representation of the road sign on the display 212 may be at any orientation and location, and not necessarily match the actual orientation and location of the road sign.
(70) In other embodiments, the display 212 may be a display in a centre console of the vehicle or a display in the dashboard of the vehicle.
(71) Optical character recognition may be used to detect words and place names from the signs.