Method and System for Synthesizing a Lane Image
20170330043 · 2017-11-16
Inventors
CPC classification
H04N5/2627
ELECTRICITY
G08G1/167
PHYSICS
B60R2300/00
PERFORMING OPERATIONS; TRANSPORTING
G06V20/588
PHYSICS
H04N5/2625
ELECTRICITY
International classification
H04N7/18
ELECTRICITY
H04N5/262
ELECTRICITY
Abstract
A method for synthesizing a lane image is proposed in the present application. This method includes the following steps. M continuous image frames are retrieved at a frame rate f from a video image capture device. A quantity N for image mapping is determined based on a dash length L of a dashed line and a distance S between two dashes of the dashed lines. A frame interval for mapping image frames is determined based on the dash length L, the distance S, a velocity v of a vehicle, and the frame rate f. At least N image frames are retrieved from the M continuous image frames at the frame interval. The at least N image frames are synthesized to obtain the lane image using an image synthesizing device.
Claims
1. A method for synthesizing a lane image, comprising: retrieving M continuous image frames at a frame rate f from a video image capture device; determining a quantity N for image mapping based on a dash length L of a dashed line and a distance S between two dashes of the dashed lines; determining a frame interval for mapping image frames based on the dash length L, the distance S, a velocity v of a vehicle, and the frame rate f; fetching at least N image frames from the M continuous image frames at the frame interval; and synthesizing the at least N image frames to obtain the lane image by an image synthesizing device.
2. The method as claimed in claim 1, wherein N=ceil(S/L)+1.
3. The method as claimed in claim 1, wherein the frame interval has a value ranging between ceil((f/v)(S/(N−1))) and floor((f/v)L).
4. The method as claimed in claim 1, wherein the step of synthesizing the at least N image frames to obtain the lane image includes: using an image addition algorithm to form the lane image.
5. The method as claimed in claim 1, wherein the M continuous image frames are configured to be saved in a memory buffer built in an embedded system.
6. The method as claimed in claim 1, wherein each of the M continuous image frames has an image being selected from one of the group consisting of a binary image, a gray scale image and a color image.
7. The method as claimed in claim 6, further comprising a step of: taking the union of the at least N image frames to form the lane image.
8. The method as claimed in claim 6, further comprising a step of: processing each of the at least N image frames with a max filter to form the lane image.
9. A method for real-time image synthesis from a video image capture device built on a vehicle, comprising: retrieving M continuous image frames at a frame rate f from the video image capture device built on the vehicle; determining a frame interval for mapping image frames based on a dash length L of a dashed line, a distance S between two dashes of the dashed lines, a real-time velocity v of the vehicle and the frame rate f; determining a quantity N for image mapping at least based on the dash length L and the distance S; fetching at least N image frames from the M continuous image frames at the frame interval; and synthesizing the at least N image frames to obtain a lane image by an image synthesizing device.
10. The method as claimed in claim 9, wherein N=ceil(S/L)+1.
11. The method as claimed in claim 9, wherein the frame interval has a value ranging between ceil((f/v)(S/(N−1))) and floor((f/v)L).
12. The method as claimed in claim 9, wherein each of the M continuous image frames has an image being selected from one of the group consisting of a binary image, a gray scale image and a color image.
13. The method as claimed in claim 12, further comprising a step of: taking the union of the at least N image frames to form the lane image.
14. The method as claimed in claim 12, further comprising a step of: processing each of the at least N image frames with a max filter to form the lane image.
15. A lane image synthesizing system of a vehicle, comprising: a database containing a plurality of images; and an image mapping module configured to: determine a quantity N for image mapping; determine an interval based on parameters including at least one of a velocity of the vehicle and a sampling rate of the plurality of images; fetch at least N images from the plurality of images according to the interval; and synthesize the at least N images into a lane image.
16. The lane image synthesizing system as claimed in claim 15, wherein the images are stored in frames.
17. The lane image synthesizing system as claimed in claim 15, wherein the parameters include a length of a dashed line and a distance between two dashes of the dashed lines.
18. The lane image synthesizing system as claimed in claim 15, further comprising an image processing module configured to perform a procedure selected from a group consisting of a regions-of-interest cropping and scaling, a contrast enhancement, an edge extraction, a noise reduction and a combination thereof for producing the lane image.
19. The lane image synthesizing system as claimed in claim 18, further comprising a prompting module configured to perform: a line detection to generate a set of candidate lines; a lane determination based on a characteristic of each of the candidate lines to identify two lane lines of the lane; a lane departure detection based on a reference line of the vehicle and the two lane lines; and a pop-up of a warning message when the vehicle deviates from one of the reference line and the lane.
20. The lane image synthesizing system as claimed in claim 19, wherein the reference line is a side of a central area of the lane in which the center line of the vehicle is located.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0031] The present invention will now be described more specifically with reference to the following embodiments. It is to be noted that the following descriptions of preferred embodiments of this invention are presented herein for the purposes of illustration and description only; it is not intended to be exhaustive or to be limited to the precise form disclosed.
[0032] The invention is related to synthesizing images shot at different times based on a velocity of a vehicle and the regulations for lane lines to obtain an optimal image synthesizing condition as well as a stable lane detection.
[0033] Please refer to the corresponding figure.
[0034] Variables d1 and d2 denote the differences in depth of field between frames 1 and 2 and between frames 2 and 3, respectively, relative to the lane 301 shot in the individual frames.
[0035] One could easily find that the video image capture device 361 built on the vehicle 36 at the location P31 shoots nearly two complete lanes 303-304 with 45 degrees of view in the ROI in a depth of field of frame 1, ranged between dashed lines 351-352. However, as the vehicle 36 moves to the location P32, the video image capture device 361 can only shoot a portion of the lane 303 within 45 degrees of view in the ROI in a depth of field of frame 2, which is also illustrated by d1 of lane 303′.
[0036] The method further includes steps of: computing an interval and a quantity for mapping images by referring to a dash length L of a dashed line and a distance S between two dashes of the dashed lines; retrieving ROIs of previous images from a video image capture device, such as frame number 1-3 shot in the upper part of this figure; and composing a number of images into a lane image as shown in the lower part of the figure. With the composed lane image, the present invention effectively improves the success rate for later lane detection.
[0037] Calculating a necessary count N for image mapping:
[0038] Please refer to the corresponding figure.
N_least = ceil(S/L) + 1 (formula I)
where ceil(x) is the ceiling function, which maps x to the least integer greater than or equal to x, N_least represents a least quantity for image mapping, L represents a dash length of a dashed line and S represents a distance between two dashes of the dashed lines. The necessary count N shall be no less than the least quantity for image mapping N_least, as in formula II:
N ≥ N_least (formula II)
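As a minimal sketch (not the patented implementation), formula I can be computed directly; the dash length L = 4 m and gap S = 6 m used below are illustrative assumptions, not values from the disclosure:

```python
import math

def least_mapping_count(L: float, S: float) -> int:
    """Formula I: N_least = ceil(S / L) + 1, where L is the dash length
    of a dashed line and S is the distance between two dashes."""
    return math.ceil(S / L) + 1

# Assumed regulation-style values: 4 m dashes separated by 6 m gaps.
print(least_mapping_count(4.0, 6.0))  # -> 3
```

Per formula II, any N no less than this value is acceptable; the embodiments prefer the least one.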
[0039] Calculating a frame interval for mapping image frames:
[0040] In order to compose dashed lines cropped from the ROI of the frames into a straight line, a moving distance d of the vehicle between two frames should be within the following range between (S/(N−1)) and L, as in formula III:
S/(N−1) ≤ d ≤ L (formula III)
[0041] In addition, it is found that there is a relationship among the elapsed time t, the velocity v of the vehicle, the moving distance d of the vehicle, the frame interval m for mapping image frames and a frame rate (sampling rate) f for a number of continuous image frames, such as frame 1, frame 2, frame 3 . . . , as in formula IV:
d = v·t = v·(m/f) (formula IV)
[0042] The formula IV can be rearranged as formula V:
m = (f/v)·d (formula V)
[0043] Because the frame interval m for mapping image frames must be an integer, applying the floor and ceiling functions to the bounds of formula III gives the following inequality VI:
ceil((f/v)(S/(N−1))) ≤ m ≤ floor((f/v)L) (inequality VI)
where floor(x) is the floor function, which maps x to the greatest integer less than or equal to x, and N_least is the minimal integer among all of the necessary counts N for image mapping.
[0044] Please note that there may be a variety of combinations as long as the necessary count N and the frame interval m both satisfy formula II and inequality VI. However, for the sake of reduced noise in the further steps of image mapping, smaller values of the necessary count N and the frame interval m are preferred in the embodiments.
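The valid range of the frame interval m given by inequality VI can be sketched as follows; the numeric inputs (30 fps, 20 m/s, L = 4 m, S = 6 m) are assumptions chosen for illustration:

```python
import math

def frame_interval_range(f: float, v: float, L: float, S: float, N: int):
    """Inequality VI: ceil((f/v)*(S/(N-1))) <= m <= floor((f/v)*L).
    f: frame rate [frames/s], v: vehicle velocity [m/s],
    L: dash length [m], S: gap between dashes [m], N: mapping count."""
    lo = math.ceil((f / v) * (S / (N - 1)))
    hi = math.floor((f / v) * L)
    return lo, hi

# Assumed example: 30 fps, 20 m/s (72 km/h), L = 4 m, S = 6 m, N = 3.
lo, hi = frame_interval_range(30, 20, 4.0, 6.0, 3)
print(lo, hi)  # -> 5 6
```

Following paragraph [0044], the smaller valid interval (here m = 5) would be preferred to reduce noise in the later mapping steps.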
[0045] Please refer to the corresponding figure.
[0046] Lane image synthesis:
[0047] After the necessary count N and the frame interval m for mapping image frames are calculated, at least N image frames are fetched from certain continuous image frames at the frame interval m. If each of the image frames is a binary image, one should take the union of the at least N image frames to form the lane image. If each of the image frames is a gray-scale image or a color image, one should apply a Max function or an addition algorithm to said image frames to form the lane image.
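The two fusion rules described above (union for binary frames, Max function for gray-scale or color frames) can be sketched with plain nested lists; frame contents and sizes below are toy assumptions:

```python
def synthesize_binary(frames):
    """Union of binary frames: a pixel belongs to the lane image
    if it belongs to the lane in any of the N frames."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[int(any(f[r][c] for f in frames)) for c in range(cols)]
            for r in range(rows)]

def synthesize_gray(frames):
    """Max filter across gray-scale frames (per-pixel maximum)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[max(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

# Two toy 1x4 binary frames, each capturing a different dash segment:
a, b = [[1, 1, 0, 0]], [[0, 0, 1, 1]]
print(synthesize_binary([a, b]))  # -> [[1, 1, 1, 1]]  (dashes joined)
```

The union joins the dash segments captured at different times into one continuous line, which is the core effect the method relies on.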
[0048] Please refer to the corresponding figure.
[0049] Thus a lane image synthesizing system implemented with this invention would fetch at least N=4 images from the plurality of frames according to the frame interval m, say frame numbers F, F−m, F−2m and F−3m. The lanes 601-602 shot in the individual frames (frame numbers F, F−m, F−2m and F−3m) are all superimposed and rendered in this figure by referring to the relative position among the lanes 601-603, the vehicle 66 and a sun 600 in the sky.
[0050] Afterwards, the image synthesizing device synthesizes the at least N=4 images illustrated in the left four squares in the lower part of the figure. The fragments of lanes 602 and 603 shown in the left four squares are then composed into a lane image shown in the rightmost square as a mapping result. The lane image is then processed with a lane detection and a lane departure detection to recognize the position of the lane.
[0051] More specifically, whenever the vehicle 66 deviates from one of the reference line and the lane, there will be a warning message pop-up for the driver.
[0052] Please refer to the corresponding figure.
[0053] A video image capture device shoots the scenes of the road as a source image (step S701), and stores each image frame in a memory buffer (step S702), wherein each image frame has an image being selected from one of the group consisting of a binary image, a gray scale image and a color image depending on the type of video image capture device.
[0054] Afterwards, an optimal calculator for the image-mapping quantity and another optimal calculator for the frame interval used in mapping image frames are applied to generate a quantity N for image mapping and a frame interval m for mapping image frames according to regulations for lane lines, a frame rate f and a real-time velocity v of a vehicle (steps S703-S705).
[0055] Afterwards, at least N image frames are fetched from a number of image frames retrieved from the memory buffer; and the at least N image frames are used to obtain the lane image using an image synthesizing device (step S706). Whenever the source image is a binary image, a further step of taking the union of the at least N image frames to form the lane image will be performed. In another example, if the source image is a gray-scale image or a color image, a Max function would be applied to said image frames to form the lane image. In addition, other image operators could be applied to said image frames with gray-scale pixels, such as a Sobel filter.
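The flow of steps S703-S706 can be sketched end to end as follows. This is a hypothetical illustration assuming gray-scale frames stored as 2-D lists (newest last) and Max-function fusion; the function name and frame layout are assumptions:

```python
import math

def lane_pipeline(frames, f, v, L, S):
    """Sketch of steps S703-S706: compute N and m, fetch every m-th
    frame back from the newest, and fuse them with a max filter."""
    N = math.ceil(S / L) + 1                    # quantity N (step S703)
    m = math.ceil((f / v) * (S / (N - 1)))      # least valid interval m
    picked = frames[::-1][::m][:N]              # frames F, F-m, F-2m, ...
    rows, cols = len(picked[0]), len(picked[0][0])
    return [[max(p[r][c] for p in picked) for c in range(cols)]
            for r in range(rows)]               # fusion (step S706)
```

With toy inputs f=10, v=10, L=2, S=2 this yields N=2 and m=2, so the newest frame and the one two frames earlier are fused.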
[0056] The image synthesizing device can be built on an embedded system or any other portable information platform. These portable information platforms, such as mobile phones, PDAs, pagers, etc., are typically based on an embedded controller that integrates a microprocessor and a set of system and application programs in the same device. Presently, a virtual machine, such as Java Virtual Machine (JVM) or Microsoft Virtual Machine (MVM) is integrated to the embedded system as a cross-platform foundation for the running of application programs on the information platform.
[0057] In step S707, once the lane image is completed, an image processing or prompting can be conducted based on a well-defined lane image as a destination image.
[0058] Please refer to the corresponding figure.
[0059] Please refer to the corresponding figure.
[0060] The idea of the image mapping module 901 is similar to that of the foregoing embodiment.
[0061] A necessary count for image mapping and a frame interval corresponding to a specific velocity of a vehicle are calculated via the image mapping calculator 9011 and the frame interval calculator 9012. Thus the image mapping calculator 9011 determines a least quantity N_least for image mapping, while the frame interval calculator 9012 determines a quantity N and the frame interval based on parameters including at least one of the velocity of the vehicle and a sampling rate of the plurality of images 900, say, 30 frames per second among these continuous images. The image mapping module 901 can obtain a velocity value, a length value, a distance value and a sampling rate value, which respectively represent the velocity v, the dash length L, the distance S and the sampling rate (or frame rate) f. For example, the frame interval is determined based on the velocity value, the length value, the distance value and the sampling rate value.
[0062] In this example, the plurality of images 900 could be stored in frames. However the plurality of images 900 could also be viewed as a stream and be stored in a multidimensional way.
[0063] In another example, the parameters used in the frame interval calculator 9012 may further include a length of a dashed line and a distance between two dashes of the dashed lines.
[0064] The image mapping module 901 is shown in the corresponding figure.
[0065] The image processing module 902 includes a ROI cropping and scaling module 9021, a contrast enhancement module 9022, an edge extraction module 9023 and a noise reduction module 9024. The image processing module 902 is configured to perform at least a procedure selected from a group consisting of regions of interest (ROI) cropping and scaling implemented by the ROI cropping and scaling module 9021, a contrast enhancement implemented by the contrast enhancement module 9022, an edge extraction implemented by the edge extraction module 9023, a noise reduction implemented by the noise reduction module 9024 and a combination thereof for producing the lane image.
[0066] For example, the ROI cropping and scaling module 9021 can change the image shape by cropping, while scaling maintains the morphology of the object in the image and does not otherwise alter the image content. The contrast enhancement module 9022 changes the image value distribution to cover a wide range for the ease of human vision. An edge extraction technique extracts the skeleton of the object in the image, such as the lines of the lane.
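As an illustration of the contrast enhancement step, a minimal linear contrast stretch is sketched below; this is a generic stand-in, not the actual module 9022 implementation, and the output range 0-255 is an assumption:

```python
def stretch_contrast(img, out_min=0, out_max=255):
    """Linear contrast stretch: remap pixel values so the darkest
    pixel maps to out_min and the brightest to out_max."""
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                      # flat image: nothing to stretch
        return [[out_min for _ in row] for row in img]
    scale = (out_max - out_min) / (hi - lo)
    return [[round(out_min + (p - lo) * scale) for p in row] for row in img]

print(stretch_contrast([[50, 100], [150, 200]]))  # -> [[0, 85], [170, 255]]
```

Spreading the value distribution this way makes the lane lines easier to separate from the road surface in the later edge extraction.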
[0067] The prompting module 903 includes a line detection module 9031, a lane determinant module 9032 and a lane departure determinant module 9033. The prompting module 903 is configured to perform: a line detection to generate a set of candidate lines, implemented by the line detection module 9031; and a lane determination based on a characteristic of each of the candidate lines, such as the distribution of the lines in the image, to identify two lane lines of the lane, implemented by the lane determinant module 9032.
[0068] The prompting module 903 can further take a lane departure determinant based on a reference line of the vehicle and the two lane lines implemented by the lane departure determinant module 9033.
[0069] The message generation module 904 can pop up a warning message when the vehicle deviates from one of the reference line and the lane.
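The departure check of modules 9033 and 904 can be sketched as follows. Per claim 20, the reference line is a side of a central area of the lane; the quarter-width margin and all variable names below are assumptions for illustration only:

```python
def departure_warning(vehicle_center_x, left_line_x, right_line_x,
                      margin_ratio=0.25):
    """Warn when the vehicle's center line leaves the central area
    bounded between the two detected lane lines (x-coordinates in
    image space). margin_ratio sets the central area's half-width."""
    lane_center = (left_line_x + right_line_x) / 2
    half_width = (right_line_x - left_line_x) * margin_ratio
    if abs(vehicle_center_x - lane_center) > half_width:
        return "Warning: lane departure"
    return None

print(departure_warning(8, 0, 10))  # center line past the reference line
```

A vehicle centered in the lane produces no message; crossing the side of the central area triggers the pop-up warning.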
[0070] The image mapping module 901, the image processing module 902 and the prompting module 903 can be implemented by an embedded system or another kind of electronic device if necessary.
[0071] Please refer to the corresponding figure.
[0072] The processing by the image processing module 902 and the prompting module 903 could then be conducted, so that a well-defined lane image is formed.
[0073] Please refer to the corresponding figure.
[0074] In the image mapping module 1102, there is a process to calculate a least quantity N_least for image mapping using an image mapping calculator 11021, and a frame interval calculator 11022 is responsible for another process that builds a table NLUT and a table mLUT corresponding to different velocities of a vehicle. The table NLUT includes a list of possible quantities for image mapping. The table mLUT is established according to a plurality of velocity values, a quantity N for image mapping and a plurality of intervals for mapping images, wherein the plurality of intervals are calculated based on the quantity N, a dash length L of a dashed line, a distance S between two dashes of the dashed line and the plurality of velocity values. The at least N image frames, with an interval between two continuous frames on the time scale, are used to obtain a lane image using an image composer 11024. These two processes can be conducted only once and calculated in advance, which increases the efficiency of the present invention.
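The precomputation described above can be sketched as follows; the table names NLUT and mLUT come from the text, but their structure (dictionaries keyed by velocity) and the numeric inputs are assumptions:

```python
import math

def build_luts(f, L, S, velocities):
    """Precompute, once per configuration, the mapping quantity N and
    the least valid frame interval m for each candidate velocity, so
    runtime processing is a constant-time table lookup."""
    N = math.ceil(S / L) + 1                                  # formula I
    nlut = {v: N for v in velocities}                         # table NLUT
    mlut = {v: math.ceil((f / v) * (S / (N - 1)))             # table mLUT
            for v in velocities}
    return nlut, mlut

nlut, mlut = build_luts(30, 4.0, 6.0, [10, 20, 30])
print(mlut)  # the interval shrinks as the vehicle speeds up
```

At runtime the system reads N and m for the current velocity directly from the tables, avoiding per-frame recomputation.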
[0075] Please refer to the corresponding figure.
[0077] Please refer to the corresponding figure.
[0079] In short, the present invention is related to a process of connecting dashed lines with a number of image frames separated by a frame interval. Thus dashed lane lines can be connected before a lane detection is performed, which is especially helpful when the detection must deal with dashed lines.
[0080] In order to effectively detect the dashed lines, the following information is needed: a velocity of a vehicle, a quantity for image mapping and a frame interval for mapping image frames. In contrast to the prior art, this invention can be applied in a driving recorder, with images captured from the front part of the vehicle. It is simple and more reliable without complex algorithms, and it will not require a substantial amount of system memory.