Lane detection apparatus and method and electronic device
11126867 · 2021-09-21
Assignee
Inventors
Cpc classification
G06V20/588
PHYSICS
G06V10/22
PHYSICS
G06V10/247
PHYSICS
International classification
Abstract
Embodiments of this disclosure provide a lane detection apparatus and method and an electronic device. First, preprocessing is performed based on semantic segmentation to remove interference objects from a binary image, which may improve the accuracy of lane line detection and may be applicable to various road scenarios; and the semantic segmentation result may be used to automatically extract a lane line region image containing one or more lane lines, so that perspective transformation, lane line search, and fitting may be performed automatically, enabling multi-lane detection. Furthermore, by synthesizing the detection results of a plurality of input images, the accuracy and integrity of the lane detection may be further improved.
Claims
1. An apparatus, comprising: a processor to couple to a memory and to, detect a plurality of input images to obtain respective lane line detection results indicating one or more lane lines in the input images; and obtain a synthetical lane line detection result according to the obtained one or more lane lines in the input images corresponding to the respective lane line detection results, wherein to obtain the one or more lane lines of an input image among the input images, the processor is to, determine regions of objects of various types in the input image based on a semantic segmentation; remove interference objects from a binary image of the input image according to the regions of objects of various types in the input image, to obtain a preprocessed binary image; extract a lane region image from the preprocessed binary image according to the regions of objects of various types in the input image; perform a perspective transformation on the extracted lane region image to obtain an overhead-view image; perform a search of one or more lane lines and a fitting on the overhead-view image to determine one or more lane lines in the overhead-view image; and perform a perspective inverse transformation on the overhead-view image with the one or more lane lines in the overhead-view image, to obtain the one or more lane lines in the input image.
2. The apparatus according to claim 1, wherein to obtain the preprocessed binary image, the processor is to: obtain the binary image of the input image; and remove the interference objects from the binary image according to locations of the interference objects in the regions of objects of various types.
3. The apparatus according to claim 1, wherein to extract the lane region, the processor is to: extract images of regions where one or more lane lines in the preprocessed binary image are located according to locations of one or more lane lines in the regions of objects of various types; and correct an image among the extracted images to obtain the lane region image.
4. The apparatus according to claim 1, wherein to perform the search and fitting to determine the one or more lane lines in the overhead-view image, the processor is to: calculate a histogram of accumulated pixel values in the overhead-view image; determine a number of one or more lane lines in the overhead-view image according to a number of waveforms in the histogram; search the one or more lane lines in the overhead-view image by using a sliding window, and detect pixels of non-zero values; and fit the detected pixels of non-zero values to determine the one or more lane lines in the overhead-view image.
5. The apparatus according to claim 1, wherein the processor is to: determine lengths of line segments and/or lengths of intervals between the line segments according to a lane line among the one or more lane lines in the overhead-view image; and determine a type of the lane line according to the lengths of line segments and/or the lengths of intervals between the line segments.
6. The apparatus according to claim 5, wherein the processor is to, determine the type of the lane line as a solid lane line when the lengths of line segments and/or the lengths of intervals between the line segments satisfy any one or combination of conditions including, a ratio of a longest line segment to a shortest line segment in the lane lines being greater than a first threshold; a length of the longest line segment in the lane lines being less than a second threshold; or an average value of the intervals between the line segments in the lane lines being less than a third threshold; or determine the type of the lane line as a dotted lane line.
7. The apparatus according to claim 1, wherein, the processor is to synthesize the respective lane line detection results of the input images where all detectable number of lane lines are detected in an input image among the input images, to obtain the synthetical lane detection result.
8. The apparatus according to claim 5, wherein, the processor is to determine the type of the lane line according to proportions of types of one or more lane lines respectively determined in overhead-view images corresponding to the input images.
9. An electronic device, comprising the apparatus as claimed in claim 1.
10. A lane detection method, comprising: by a processor, detecting a plurality of input images to obtain respective lane line detection results indicating one or more lane lines in the input images; and obtaining a synthetical lane line detection result according to the obtained one or more lane lines in the input images corresponding to the respective lane line detection results; wherein, to obtain the one or more lane lines of an input image among the input images, the processor is to, determining regions of objects of various types in the input image based on a semantic segmentation; removing interference objects from a binary image of the input image according to the regions of objects of various types in the input image, to obtain a preprocessed binary image; extracting a lane region image from the preprocessed binary image according to the regions of objects of various types in the input image; performing a perspective transformation on the extracted lane region image to obtain an overhead-view image; performing a search of one or more lane lines and a fitting on the overhead-view image, to determine one or more lane lines in the overhead-view image; and performing a perspective inverse transformation on the overhead-view image with the one or more lane lines in the overhead-view image, to obtain the one or more lane lines in the input image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The drawings are included to provide further understanding of this disclosure, which constitute a part of the specification and illustrate the preferred embodiments of this disclosure, and are used for setting forth the principles of this disclosure together with the description. It is obvious that the accompanying drawings in the following description are some embodiments of this disclosure, and for those of ordinary skill in the art, other accompanying drawings may be obtained according to these accompanying drawings without making an inventive effort. In the drawings:
DETAILED DESCRIPTION
(21) These and further aspects and features of this disclosure will be apparent with reference to the following description and attached drawings. In the description and drawings, particular embodiments of the disclosure have been disclosed in detail as being indicative of some of the ways in which the principles of the disclosure may be employed, but it is understood that the disclosure is not limited correspondingly in scope. Rather, the disclosure includes all changes, modifications and equivalents coming within the terms of the appended claims.
First Embodiment
(22) The embodiment of this disclosure provides a lane detection apparatus, which includes:
(23) a detecting unit 110 configured to respectively detect a plurality of input images to obtain lane detection results of the input images; and
(24) a synthesizing unit 120 configured to obtain a synthetical lane detection result according to the lane detection results of the input images;
(25) wherein the detecting unit 110 includes:
(26) a segmenting unit 111 configured to determine regions of objects of various types in an input image based on semantic segmentation;
(27) a preprocessing unit 112 configured to remove interference objects in a binary image of the input image according to the regions of objects of various types in the input image to obtain a preprocessed binary image;
(28) an extracting unit 113 configured to extract a lane region image from the preprocessed binary image according to the regions of objects of various types in the input image;
(29) a transforming unit 114 configured to perform perspective transformation on an extracted lane region image to obtain an overhead-view image;
(30) a searching unit 115 configured to perform lane line search and fitting on the overhead-view image to determine lane lines in the overhead-view image; and
(31) an inverse transforming unit 116 configured to perform perspective inverse transformation on the overhead-view image with lane lines being determined, and determine lane lines of the input image according to a result of the perspective inverse transformation to obtain the lane detection results of the input image.
(32) It can be seen from the above embodiment that, first, preprocessing is performed based on semantic segmentation to remove interference objects from a binary image, which may improve the accuracy of the lane detection and may be applicable to various road scenarios; and the semantic segmentation result may be used to automatically extract a lane region image containing all the lanes, so that perspective transformation, lane search, and fitting may be performed automatically, achieving multi-lane detection. Furthermore, by synthesizing the detection results of a plurality of input images, the accuracy and integrity of the lane detection may be further improved.
(33) In this embodiment, the detecting unit 110 respectively detects the plurality of input images to obtain the lane detection results of the input images. In this embodiment, for example, the plurality of input images are a plurality of consecutive frames in a surveillance video capturing the same scene, or a plurality of frames at intervals in the surveillance video.
(34) In this embodiment, the number of the plurality of input images may be set as actually demanded.
(35) In this embodiment, the detecting unit 110 respectively detects each input image in the input images, and an order of detection is not limited herein.
Detection methods of the detecting unit 110 for each input image are identical. Hereinafter, a method used by the detecting unit 110 to detect one of the plurality of input images is described.
(38) In this embodiment, an existing method, such as semantic segmentation based on DeepLab V3, may be used by the segmenting unit 111 in performing the semantic segmentation.
(40) In this embodiment, the preprocessing unit 112 removes the interference objects in the binary image of the input image according to the segmentation result of the segmenting unit 111 performed on the input image, that is, according to the regions of the objects of the types in the input image, and obtains the preprocessed binary image.
(41) For example, the preprocessing unit 112 includes:
(42) a binarizing unit 401 configured to obtain the binary image of the input image; and
(43) a removing unit 402 configured to remove interference objects from the binary image according to the regions where the interference objects in objects of various types are located, to obtain the preprocessed binary image.
(44) In this embodiment, the binarizing unit 401 may obtain the binary image in an existing method.
(46) The removing unit 402 removes the interference objects from the binary image according to the segmentation result of the segmenting unit 111 performed on the input image, that is, according to the regions of the interference objects among the objects of various types, to obtain the preprocessed binary image.
(48) In this embodiment, the types of objects included in the interference objects needing to be removed may be determined according to an actual application scenario, and any object that may affect the detection of the lane lines may be used as an interference object. In this embodiment, the interference objects include, for example, a vehicle moving on a lane, a green belt between roads, and a street lamp.
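As an illustrative sketch of this preprocessing (not part of the claimed embodiments), the binary image may be obtained by thresholding the input image, and pixels that the segmentation label map marks as interference objects may then be zeroed out. The class IDs and threshold below are assumptions for illustration; the actual label set depends on the segmentation model used.

```python
import numpy as np

# Hypothetical class IDs in the semantic segmentation label map.
VEHICLE, GREEN_BELT, STREET_LAMP = 1, 2, 3
INTERFERENCE_CLASSES = (VEHICLE, GREEN_BELT, STREET_LAMP)

def preprocess_binary(gray, label_map, thresh=128):
    """Binarize a grayscale image, then zero out pixels that the
    segmentation labels as interference objects."""
    binary = (gray >= thresh).astype(np.uint8)            # simple global threshold
    interference = np.isin(label_map, INTERFERENCE_CLASSES)
    binary[interference] = 0                              # remove interference objects
    return binary

# Toy 4x4 example: a bright image with one labelled vehicle region.
gray = np.full((4, 4), 200, dtype=np.uint8)
labels = np.zeros((4, 4), dtype=np.int32)
labels[1:3, 1:3] = VEHICLE
out = preprocess_binary(gray, labels)
```

In practice the binarization step may be any existing method (adaptive thresholding, edge maps, etc.); only the masking by segmentation regions is specific to this preprocessing.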
(49) In this embodiment, the extracting unit 113 extracts the lane region image from the preprocessed binary image according to the segmentation result of the segmenting unit 111 performed on the input image, that is, according to the regions of the objects of the types in the input image.
(50) In this embodiment, a shape of the extracted lane region image may be determined according to a requirement of the transforming unit 114 in performing the perspective transformation, for example, the extracted lane region image is of a trapezoidal shape.
(51) For example, the extracting unit 113 includes:
(52) a first extracting unit 701 configured to extract images of regions where lane lines in the preprocessed binary image are located according to regions where lane lines in the objects of various types are located; and
(53) a correcting unit 702 configured to correct the image of the regions where the lane lines are located to obtain the lane region image.
(55) The correcting unit 702 corrects the image of the regions where the lane lines are located, which is extracted by the first extracting unit 701, such as expanding the shape of the extracted image and supplementing it into a complete trapezoidal shape.
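A minimal sketch of the extraction step is given below; the lane-line class ID is an assumption, and the trapezoid correction performed by the correcting unit 702 is reduced here to a simple bounding-box crop for illustration.

```python
import numpy as np

LANE_LINE = 4  # hypothetical class ID for lane-line pixels in the label map

def extract_lane_region(binary, label_map):
    """Crop the preprocessed binary image to the bounding box of the
    region the segmentation labels as lane lines. A fuller implementation
    would additionally expand/correct this region into a trapezoid
    suitable for the subsequent perspective transformation."""
    ys, xs = np.nonzero(label_map == LANE_LINE)
    top, bottom = ys.min(), ys.max() + 1
    left, right = xs.min(), xs.max() + 1
    return binary[top:bottom, left:right]

binary = np.ones((6, 8), dtype=np.uint8)
labels = np.zeros((6, 8), dtype=np.int32)
labels[2:5, 1:7] = LANE_LINE          # lane lines occupy a 3x6 region
region = extract_lane_region(binary, labels)
```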
(56) After the extracting unit 113 extracts the lane region image, the transforming unit 114 performs perspective transformation on the extracted lane region image to obtain the overhead-view image.
(57) In this embodiment, the transforming unit 114 may perform the perspective transformation in an existing method.
(58) For example, the perspective transformation may be performed according to the following formulae (1) and (2):
(59) [x′, y′, w′] = [u, v, w]·T  (1);

T = [[a₁₁, a₁₂, a₁₃], [a₂₁, a₂₂, a₂₃], [a₃₁, a₃₂, a₃₃]]  (2);
(60) where, (u, v) are the coordinates of pixels in the lane region image, (x=x′/w′, y=y′/w′) are the coordinates of the pixels in the overhead-view image, w′ and w are coordinate transformation coefficients, T is the perspective transformation matrix, and a₁₁, . . . , a₃₃ are the elements of the perspective transformation matrix.
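Formulae (1) and (2), together with the inverse mapping of formula (3) below, can be sketched as follows. The matrix T used here is an illustrative assumption; in practice it would be derived from point correspondences between the lane region and the overhead view.

```python
import numpy as np

def perspective(u, v, T):
    """Formulae (1) and (2): [x', y', w'] = [u, v, w] @ T with w = 1,
    followed by the normalization x = x'/w', y = y'/w'."""
    x_, y_, w_ = np.array([u, v, 1.0]) @ T
    return x_ / w_, y_ / w_

def inverse_perspective(x, y, T):
    """Formula (3): [u, v, w] = [x', y', w'] @ inv(T), then normalize."""
    u_, v_, w_ = np.array([x, y, 1.0]) @ np.linalg.inv(T)
    return u_ / w_, v_ / w_

# Illustrative invertible perspective matrix (an assumption).
T = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.001],
              [0.0, 0.0, 1.0]])

x, y = perspective(10.0, 20.0, T)     # lane-region pixel -> overhead view
u, v = inverse_perspective(x, y, T)   # overhead view -> original pixel
```

Applying the inverse transformation to a transformed point recovers the original coordinates, which is the property the inverse transforming unit 116 relies on.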
(62) After obtaining the overhead-view image by the transforming unit 114, the searching unit 115 performs lane line searches and fitting on the overhead-view image to determine the lane lines in the overhead-view image. A structure of the searching unit 115 and a search method shall be described below by way of examples.
(63) For example, the searching unit 115 includes:
(64) a calculating unit 1101 configured to calculate a histogram of accumulated pixel values in the overhead-view image;
(65) a first determining unit 1102 configured to determine the number of lane lines in the overhead-view image according to the number of waveforms in the histogram;
(66) a first searching unit 1103 configured to search the lane lines in the overhead-view image by using a sliding window, and detect pixels of non-zero values; and
(67) a fitting unit 1104 configured to fit the detected pixels of non-zero values to determine the lane lines in the overhead-view image.
(68) In this embodiment, the calculating unit 1101 calculates the histogram of the accumulated pixel values of the overhead-view image.
(69) The first determining unit 1102 determines the number of lane lines in the overhead-view image according to the number of waveforms in the histogram; that is, the number of waveforms in the histogram corresponds to the number of lane lines in the overhead-view image. In addition, waveforms with amplitudes less than a preset threshold may be removed, so as to remove noise interference.
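The histogram and waveform count described above can be sketched as follows; the synthetic image and the amplitude threshold are illustrative assumptions.

```python
import numpy as np

def count_waveforms(overhead_binary, min_amplitude=2):
    """Accumulate pixel values down each column of the overhead-view
    image, then count contiguous runs of the histogram at or above
    min_amplitude; weaker waveforms are discarded as noise."""
    hist = overhead_binary.sum(axis=0)
    above = np.concatenate(([0], (hist >= min_amplitude).astype(int), [0]))
    # Each rising edge of the thresholded histogram is one waveform.
    return int(np.count_nonzero(np.diff(above) == 1)), hist

overhead = np.zeros((10, 20), dtype=np.uint8)
overhead[:, [3, 9, 15]] = 1      # three vertical lane lines
overhead[0, 6] = 1               # a single noise pixel, below the threshold
n_lanes, hist = count_waveforms(overhead)
```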
(71) The first searching unit 1103 searches the lane lines in the overhead-view image by using the sliding window to detect the pixels of non-zero values.
(72) In this embodiment, the first searching unit 1103 respectively searches the lane lines, and search processes for searching the lane lines may be performed in parallel or one by one, and a search sequence of the lane lines is not limited in the embodiment of this disclosure. A method for searching a lane line by the first searching unit 1103 shall be illustrated below.
(73) For example, in searching for a lane line, the first searching unit 1103 first determines a starting point of the search, such as taking a pixel point of the histogram corresponding to a peak value of a waveform of the lane line as the starting point, then, moves the sliding window up and down to perform grid search, to determine all pixel points of non-zero values in a searched region. For example, an average coordinate of all pixel points of non-zero values detected in the sliding window is taken as a starting point of a next sliding window to be moved.
(74) In this embodiment, a size of the sliding window may be determined according to an actual situation. For example, a width of the sliding window is a predetermined proportion of a width of the overhead-view image, such as 1/10, and a height of the sliding window is a height of the overhead-view image divided by the number of times of movement of the sliding window.
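A minimal sketch of the sliding-window search, assuming a bottom-to-top traversal and the window sizing described above (the synthetic image, start point, and half-width are illustrative):

```python
import numpy as np

def sliding_window_search(binary, start_x, n_windows=5, half_width=2):
    """Collect the non-zero pixels of one lane line by moving a window
    from the bottom of the overhead-view image upward, re-centering each
    window on the average x coordinate of the pixels it captures."""
    h, w = binary.shape
    win_h = h // n_windows     # window height = image height / number of moves
    x_center = start_x
    xs, ys = [], []
    for i in range(n_windows):
        y_lo, y_hi = h - (i + 1) * win_h, h - i * win_h
        x_lo = max(0, x_center - half_width)
        x_hi = min(w, x_center + half_width + 1)
        wy, wx = np.nonzero(binary[y_lo:y_hi, x_lo:x_hi])
        if wx.size:
            xs.extend(wx + x_lo)
            ys.extend(wy + y_lo)
            x_center = int(round((wx + x_lo).mean()))  # start of next window
    return np.array(xs), np.array(ys)

overhead = np.zeros((10, 20), dtype=np.uint8)
overhead[:, 7] = 1                                     # a lane line at x = 7
xs, ys = sliding_window_search(overhead, start_x=8)    # start slightly off-peak
```

Even with an off-peak starting point, the re-centering step pulls the windows onto the line, so all of its pixels are collected.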
(75) The fitting unit 1104 performs fitting on the pixels of non-zero values detected by the first searching unit 1103 to determine the lane lines in the overhead-view image.
(76) For example, the fitting unit 1104 performs fitting on all pixels of non-zero values detected on the lane line by using a second-order polynomial to obtain parameters of fitted curves of the lane line, thereby determining a shape of the lane line.
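The second-order fit can be sketched with `np.polyfit` on synthetic pixels lying on a known curve (the curve coefficients are illustrative):

```python
import numpy as np

# Synthetic non-zero pixels lying on a known second-order curve x = f(y).
ys = np.arange(20, dtype=float)
xs = 0.05 * ys ** 2 - 0.5 * ys + 8.0

# Second-order polynomial fit, as performed by the fitting unit 1104:
# coeffs = [a, b, c] such that x ~= a*y**2 + b*y + c.
coeffs = np.polyfit(ys, xs, 2)
```

The fitted coefficients determine the shape of the lane line; fitting x as a function of y (rather than the reverse) keeps near-vertical lane lines well-conditioned.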
(78) In this embodiment, the searching unit 115 determines the shapes and positions of the lane lines in the overhead-view image via the detected pixels of non-zero values. In addition, according to the lane lines determined by the searching unit 115, types of the lane lines may also be determined.
(79) For example, the detecting unit 110 may further include:
(80) a second determining unit 117 configured to determine lengths of line segments and/or lengths of intervals between the line segments according to determined lane lines in the overhead-view image; and
(81) a third determining unit 118 configured to determine types of the lane lines according to the lengths of line segments and/or the lengths of intervals between the line segments in the lane lines.
(82) In this embodiment, the types of the lane lines are, for example, solid lines and dotted lines.
(83) In this way, the types of the lane lines may be detected, hence, solid lines and dotted lines may be distinguished.
(84) In this embodiment, the second determining unit 117 determines the lengths of the line segments and/or the lengths of intervals between the line segments according to the determined lane lines in the overhead-view image.
(85) For example, the searching unit 115 determines the lane lines in the overhead-view image via the detected pixels of non-zero values, then continuous pixels of non-zero values constitute a line segment, hence, the lengths of the line segments may be determined. And furthermore, the lengths of the intervals between adjacent line segments may also be determined.
(86) The third determining unit 118 determines the types of the lane lines according to the lengths of the line segments and/or the lengths of the intervals between the line segments in the lane lines. For example, the third determining unit 118 determines a lane line as a solid line when the lengths of the line segments and/or the lengths of the intervals between the line segments satisfy at least one of the following conditions, and otherwise determines the lane line as a dotted line: a ratio of a longest line segment to a shortest line segment in the lane line being greater than a first threshold L1; a length of the longest line segment in the lane line being less than a second threshold L2; and an average value of all the intervals between the line segments in the lane line being less than a third threshold L3.
(87) In this embodiment, particular values of the first threshold L1, the second threshold L2 and the third threshold L3 may be set as actually demanded.
(89) Step 1401: the lengths of the line segments in the lane line are calculated;
(90) Step 1402: it is determined whether the ratio of the longest line segment to the shortest line segment in the lane line is greater than the first threshold L1, proceeding to step 1407 when the determination result is “yes”, and proceeding to step 1403 when the determination result is “no”;
(91) Step 1403: it is determined whether the length of the longest line segment in the lane line is less than the second threshold L2, proceeding to step 1407 when the determination result is “yes”, and proceeding to step 1404 when the determination result is “no”;
(92) Step 1404: the lengths of the intervals between the line segments in the lane line are calculated;
(93) Step 1405: it is determined whether the average value of the intervals between all the line segments in the lane line is less than the third threshold L3, proceeding to step 1407 when the determination result is “yes”, and proceeding to step 1406 when the determination result is “no”;
(94) Step 1406: it is determined that the lane line is a dotted line; and
(95) Step 1407: it is determined that the lane line is a solid line.
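The flow of steps 1401 through 1407 can be sketched as follows. The extraction of segment and interval lengths from a pixel column, and the threshold values L1, L2, L3, are illustrative assumptions.

```python
import numpy as np

def run_lengths(column):
    """Segment lengths (runs of non-zero pixels) and interval lengths
    (gaps between adjacent segments) along one lane line."""
    mask = np.concatenate(([0], (np.asarray(column) != 0).astype(int), [0]))
    edges = np.diff(mask)
    starts = np.nonzero(edges == 1)[0]
    ends = np.nonzero(edges == -1)[0]
    return ends - starts, starts[1:] - ends[:-1]

def is_solid(segments, intervals, L1=3.0, L2=5, L3=4.0):
    """Steps 1402-1407: the lane line is determined to be a solid line
    when any one of the three conditions holds, a dotted line otherwise."""
    if segments.max() / segments.min() > L1:          # step 1402
        return True
    if segments.max() < L2:                           # step 1403
        return True
    if intervals.size and intervals.mean() < L3:      # step 1405
        return True
    return False                                      # step 1406: dotted

column = np.zeros(20, dtype=np.uint8)
column[0:3] = column[8:11] = column[16:19] = 1        # three short segments
segments, intervals = run_lengths(column)
```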
(96) In this embodiment, after the searching unit 115 determines the positions and types of the lane lines in the overhead-view image, corresponding lane lines may be marked according to the positions and types of the lane lines. Then, the inverse transforming unit 116 performs perspective inverse transformation on the overhead-view image with lane lines being determined, and determines the lane lines of the input image according to the result of the perspective inverse transformation to obtain the lane detection results of the input image.
(97) The inverse transforming unit 116 may perform the inverse perspective transformation on the overhead-view image in an existing method.
(98) For example, the inverse perspective transformation is performed on the overhead-view image according to formula (3) as below:
[u, v, w] = [x′, y′, w′]·T⁻¹  (3);
(99) where, (u, v) are the coordinates of pixels in the image after the perspective inverse transformation, (x=x′/w′, y=y′/w′) are the coordinates of pixels in the overhead-view image, w′ and w are coordinate transformation coefficients, and T⁻¹ is the inverse matrix of the perspective transformation matrix T.
(100) After the inverse perspective transformation is performed on the overhead-view image, the image after the inverse perspective transformation may be superimposed on the input image to obtain the lane detection result of the input image, such as an input image with lane lines being marked.
(102) In this embodiment, the lane detection is achieved by detecting the lane lines, and two adjacent lane lines constitute a lane.
(103) A process of detecting an input image in a plurality of input images by the detecting unit 110 is described above. For the plurality of input images, the above detection may be performed respectively to obtain the lane detection results of the input images; thereafter, the synthesizing unit 120 obtains the synthetical lane detection result according to the lane detection results of the input images.
(104) For example, the synthesizing unit 120 synthesizes the lane detection results of the input images in which all detectable lane lines are detected, to obtain the synthetical lane detection result.
(105) For example, the plurality of input images are a total of eight input images, which are input image 1, input image 2, . . . , and input image 8, respectively. For example, four lane lines are detected in input images 1, 3, 4, 6, 7, 8, and three lane lines are detected in input images 2 and 5. Then, the lane detection results of input images 2 and 5 are removed, that is, the lane detection results of input images in which all detectable lane lines are detected are reserved.
(106) In this embodiment, the synthesizing unit 120 may superimpose the pixels of non-zero values on the lane lines in the input images in which all detectable lane lines are detected, to obtain superimposed lane lines, that is, the synthetical detection result.
(107) In addition, in a case where the types of the lane lines are determined, the synthesizing unit 120 may determine the type of a lane line according to the proportions of the types respectively determined for that lane line in the input images.
(108) For example, in input images 1, 3, 4, 6, 7, 8, in which all detectable lane lines are detected, the detection results of input images 1, 3, 6, 7, 8 are solid lines for the leftmost lane line, and the detection result of input image 4 is a dotted line. Hence, the ratio of results in which the lane line is detected as a solid line is 5/6, which exceeds a preset threshold of 0.5, and the lane line is therefore synthetically determined to be a solid line.
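The eight-image example above can be sketched as follows; the per-image result structure and the 0.5 threshold mirror the example, but are otherwise assumptions for illustration.

```python
def synthesize_types(results, threshold=0.5):
    """Keep only the results in which the maximum number of lane lines
    was detected, then decide each line's type by majority proportion."""
    full = max(r["n"] for r in results)
    kept = [r for r in results if r["n"] == full]   # drop incomplete detections
    types = []
    for i in range(full):
        solid = sum(1 for r in kept if r["types"][i] == "solid")
        types.append("solid" if solid / len(kept) > threshold else "dotted")
    return types

results = []
for idx in range(1, 9):                             # input images 1..8
    if idx in (2, 5):                               # only three lane lines detected
        results.append({"n": 3, "types": ["solid"] * 3})
    else:                                           # all four lane lines detected
        leftmost = "dotted" if idx == 4 else "solid"
        results.append({"n": 4, "types": [leftmost] + ["solid"] * 3})

merged = synthesize_types(results)
```

Images 2 and 5 are dropped; among the remaining six, the leftmost line is solid in 5/6 of the results, so it is synthetically determined to be solid.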
(109) It can be seen from the above embodiment that, first, preprocessing is performed based on semantic segmentation to remove interference objects from a binary image, which may improve the accuracy of the lane detection and may be applicable to various road scenarios; and the semantic segmentation result may be used to automatically extract a lane region image containing all the lanes, so that perspective transformation, lane search, and fitting may be performed automatically, achieving multi-lane detection. Furthermore, by synthesizing the detection results of a plurality of input images, the accuracy and integrity of the lane detection may be further improved.
Second Embodiment
(110) The embodiment of this disclosure provides an electronic device.
(113) In one implementation, the functions of the lane detection apparatus described in the first Embodiment may be integrated into the central processing unit 1701. The central processing unit 1701 may be configured to: respectively detect a plurality of input images to obtain lane detection results of the input images; and obtain a synthetical lane detection result according to the lane detection results of the input images; wherein, detecting one of the input images includes: determining regions of objects of various types in the input image based on semantic segmentation; removing interference objects in a binary image of the input image according to the regions of objects of various types in the input image, to obtain a preprocessed binary image; extracting a lane region image from the preprocessed binary image according to the regions of objects of various types in the input image; performing perspective transformation on an extracted lane region image to obtain an overhead-view image; performing lane line search and fitting on the overhead-view image, to determine lane lines in the overhead-view image; and performing perspective inverse transformation on the overhead-view image with lane lines being determined, and determining lane lines of the input image according to a result of the perspective inverse transformation, to obtain the lane detection result of the input image.
(114) For example, the extracting a lane region image from the preprocessed binary image according to the regions of objects of various types in the input image includes: extracting images of regions where lane lines in the preprocessed binary image are located according to regions where lane lines in the objects of various types are located; and correcting the image of the regions where the lane lines are located to obtain the lane region image.
(115) For example, the performing lane line search and fitting on the overhead-view image, to determine lane lines in the overhead-view image, includes: calculating a histogram of accumulated pixel values in the overhead-view image; determining the number of lane lines in the overhead-view image according to the number of waveforms in the histogram; searching the lane lines in the overhead-view image by using a sliding window, and detecting pixels of non-zero values; and fitting the detected pixels of non-zero values to determine the lane lines in the overhead-view image.
(116) For example, the detecting one of the input images further includes: determining lengths of line segments and/or lengths of intervals between the line segments according to determined lane lines in the overhead-view image; and determining types of the lane lines according to the lengths of line segments and/or the lengths of intervals between the line segments in the lane lines.
(117) For example, the determining types of the lane lines according to the lengths of line segments and/or the lengths of intervals between the line segments in the lane lines includes: determining the lane lines as solid lines when the lengths of line segments and/or the lengths of intervals between the line segments satisfy at least one of the following conditions, otherwise, determining the lane lines as dotted lines: a ratio of a longest line segment to a shortest line segment in the lane lines being greater than a first threshold; a length of the longest line segment in the lane lines being less than a second threshold; and an average value of all the intervals between the line segments in the lane lines being less than a third threshold.
(118) For example, the obtaining a synthetical lane detection result according to the lane detection results of the input images includes: synthesizing the lane detection results of the input images in which all detectable lane lines are detected, to obtain the synthetical lane detection result.
(119) For example, the synthesizing the lane detection results of the input images in which all detectable lane lines are detected, to obtain the synthetical lane detection result, includes: determining a type of a lane line according to proportions of the types of the lane line respectively determined in the input images.
(120) In another implementation, the lane detection apparatus described in the first Embodiment and the central processing unit 1701 may be configured separately; for example, the lane detection apparatus may be configured as a chip connected to the central processing unit 1701, and the functions of the lane detection apparatus are executed under control of the central processing unit 1701.
(121) In this embodiment, the electronic device 1700 does not necessarily include all the components shown in the accompanying drawings.
(123) The memory 1702 may be, for example, one or more of a buffer memory, a flash memory, a hard drive, a mobile medium, a volatile memory, a nonvolatile memory, or other suitable devices, which may store the information on configuration, etc., and furthermore, store programs executing related information. And the central processing unit 1701 may execute programs stored in the memory 1702, so as to realize information storage or processing, etc. Functions of other parts are similar to those of the related art, which shall not be described herein any further. The parts of the terminal device, or the electronic device 1700 may be realized by specific hardware, firmware, software, or any combination thereof, without departing from the scope of this disclosure.
(124) It can be seen from the above embodiment that, first, preprocessing is performed based on semantic segmentation to remove interference objects from a binary image, which may improve the accuracy of the lane detection and may be applicable to various road scenarios; and the semantic segmentation result may be used to automatically extract a lane region image containing all the lanes, so that perspective transformation, lane search, and fitting may be performed automatically, achieving multi-lane detection. Furthermore, by synthesizing the detection results of a plurality of input images, the accuracy and integrity of the lane detection may be further improved.
Third Embodiment
(125) An embodiment of this disclosure provides a lane detection method, corresponding to the lane detection apparatus described in the first Embodiment.
(126) Step 1801: a plurality of input images are respectively detected to obtain lane detection results of the input images; and
(127) Step 1802: a synthetical lane detection result is obtained according to detection results of lanes in the input images.
(128)
(129) Step 1901: regions of objects of various types in the input image are determined based on semantic segmentation;
(130) Step 1902: interference objects in a binary image of the input image are removed according to the regions of objects of various types in the input image, to obtain a preprocessed binary image;
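Purely as an illustrative sketch of step 1902 (the class names and shapes are assumptions, not taken from this disclosure), interference removal may be thought of as zeroing out binary-image pixels whose semantic-segmentation label belongs to an interference class:

```python
def remove_interference(binary, seg_labels, interference=("vehicle", "pedestrian")):
    """Zero out binary-image pixels that the semantic segmentation assigned
    to an interference class. `binary` (0/1 values) and `seg_labels` (class
    names) are 2-D lists of the same shape; illustrative only."""
    return [
        [0 if seg_labels[r][c] in interference else binary[r][c]
         for c in range(len(binary[0]))]
        for r in range(len(binary))
    ]
```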
(131) Step 1903: a lane region image is extracted from the preprocessed binary image according to the regions of objects of various types in the input image;
(132) Step 1904: perspective transformation is performed on the extracted lane region image to obtain an overhead-view image;
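As a minimal sketch of the perspective transformation of step 1904 (the matrix layout and function name are assumptions), each pixel may be mapped by a 3×3 homography with homogeneous normalization; the inverse transformation of step 1906 would apply the matrix inverse in the same way:

```python
def warp_point(H, x, y):
    """Apply a 3x3 homography H (row-major nested list) to pixel (x, y),
    normalizing by the homogeneous coordinate. Illustrative only."""
    xw = H[0][0] * x + H[0][1] * y + H[0][2]
    yw = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xw / w, yw / w
```

Note that scaling H by any nonzero constant leaves the mapped point unchanged, because of the division by the homogeneous coordinate.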
(133) Step 1905: lane line search and fitting are performed on the overhead-view image, to determine lane lines in the overhead-view image; and
(134) Step 1906: perspective inverse transformation is performed on the overhead-view image with lane lines being determined, and lane lines of the input images are determined according to a result of the perspective inverse transformation, to obtain the lane detection result of the input image.
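One conventional way to realize the search and fitting of step 1905 (a sketch under assumed, simplified conditions, not the claimed implementation) is to locate lane-line base columns from a histogram of the lower half of the overhead-view binary image, then fit each lane line by least squares; a quadratic fit is common, but a linear fit is shown here for brevity:

```python
def find_lane_bases(binary, min_count=1):
    """Column histogram over the lower half of an overhead-view binary
    image; local maxima above min_count approximate lane-line base columns."""
    h, w = len(binary), len(binary[0])
    hist = [sum(row[c] for row in binary[h // 2:]) for c in range(w)]
    return [c for c in range(w)
            if hist[c] >= min_count
            and (c == 0 or hist[c] > hist[c - 1])
            and (c == w - 1 or hist[c] >= hist[c + 1])]

def fit_line(points):
    """Ordinary least-squares fit x = a*y + b through (y, x) pixel
    coordinates, the usual parameterization for near-vertical lane lines."""
    n = len(points)
    sy = sum(y for y, _ in points)
    sx = sum(x for _, x in points)
    syy = sum(y * y for y, _ in points)
    syx = sum(y * x for y, x in points)
    a = (n * syx - sy * sx) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b
```

The fitted lane lines, expressed in overhead-view coordinates, would then be mapped back to the input image by the inverse perspective transformation of step 1906.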
(135) In the embodiment of this disclosure, reference may be made to what is described in the first Embodiment for particular implementations of the above steps, which shall not be described herein any further.
(136) It can be seen from the above embodiment that, first, preprocessing is performed based on semantic segmentation to remove interference objects in a binary image, which may improve accuracy of the lane detection and may be applicable to various road scenarios, and the result of the semantic segmentation may be used to automatically extract a lane region image containing all the lanes, thereby automatically performing perspective transformation so as to perform search and fitting on the lanes and achieve multi-lane detection. Furthermore, by synthesizing detection results of a plurality of input images, accuracy and integrity of the lane detection may further be improved.
(137) An embodiment of the present disclosure provides a computer readable program, which, when executed in a lane detection apparatus or an electronic device, will cause a computer to carry out the lane detection method as described in the third Embodiment in the lane detection apparatus or the electronic device.
(138) An embodiment of the present disclosure provides a computer storage medium, including a computer readable program code, which will cause a computer to carry out the lane detection method as described in the third Embodiment in a lane detection apparatus or an electronic device.
(139) The lane detection method carried out in the lane detection apparatus or the electronic device as described with reference to the embodiments of this disclosure may be directly embodied as hardware, software modules executed by a processor, or a combination thereof. For example, one or more functional block diagrams and/or one or more combinations of the functional block diagrams shown in
(140) The software modules may be located in a RAM, a flash memory, a ROM, an EPROM, an EEPROM, a register, a hard disc, a floppy disc, a CD-ROM, or any memory medium in other forms known in the art. A memory medium may be coupled to a processor, so that the processor may be able to read information from the memory medium and write information into the memory medium; or the memory medium may be a component of the processor. The processor and the memory medium may be located in an ASIC. The software modules may be stored in a memory of a mobile terminal, and may also be stored in a pluggable memory card of a mobile terminal. For example, if equipment (such as a mobile terminal) employs an MEGA-SIM card of a relatively large capacity or a flash memory device of a large capacity, the software modules may be stored in the MEGA-SIM card or the flash memory device of a large capacity.
(141) One or more functional blocks and/or one or more combinations of the functional blocks in
(142) This disclosure is described above with reference to particular embodiments. However, it should be understood by those skilled in the art that such a description is illustrative only, and not intended to limit the protection scope of the present disclosure. Various variants and modifications may be made by those skilled in the art according to the principle of the present disclosure, and such variants and modifications fall within the scope of the present disclosure.