DETECTION APPARATUS AND METHOD FOR PARKING SPACE, AND IMAGE PROCESSING DEVICE
20170177956 · 2017-06-22
Abstract
A detection apparatus and method for parking space detection, and an image processing device, where the detection method includes: performing conversion on a side-view image that is a photograph of the parking space and is acquired from a camera, to obtain a top-view image including said parking space; acquiring an edge image including a plurality of edges based on gradient information of said top-view image; performing conversion on said edge image, obtaining a voting vector according to said gradient information, and determining marking lines according to peak values of said voting vector; and determining one or more parking spaces based on a plurality of said marking lines.
Claims
1. A detection apparatus for a parking space, comprising: an angle conversion unit configured to perform conversion of a side-view image, which is a photograph of the parking space acquired via a camera, to obtain a top-view image of said parking space; an edge acquisition unit configured to acquire an edge image comprising a plurality of edges based on gradient information of said top-view image; a marking line determination unit configured to perform conversion of said edge image and obtain a voting vector according to said gradient information, and determine marking lines according to peak values of said voting vector; and a parking space determination unit configured to determine one or more parking spaces based on a plurality of said marking lines.
2. The detection apparatus according to claim 1, wherein the detection apparatus further comprises: an angle recovery unit configured to perform conversion on the top-view image including one or more of said parking spaces to obtain the side-view image including said parking spaces; and an image display unit configured to display one of said top-view image and said side-view image comprising said parking spaces.
3. The detection apparatus according to claim 1, wherein the detection apparatus further comprises: a target selection unit configured to select a target parking space from the one or more said parking spaces; and an information generation unit configured to generate parking guidance information based on a positional relationship between said target parking space and a vehicle.
4. The detection apparatus according to claim 1, wherein said angle conversion unit is configured to convert said side-view image into said top-view image based on parameters of said camera; wherein said parameters comprise a focal length of said camera, an included angle between said camera and a horizontal plane, and a height of said camera from the ground.
5. The detection apparatus according to claim 1, wherein said edge acquisition unit comprises: an information acquisition unit configured to acquire a gradient intensity and a gradient direction of said top-view image, and calculate direction information based on a histogram of said gradient direction; an image difference unit configured to perform difference processing on said top-view image to obtain difference information; a circular filtering unit configured to construct a circular filter of which a diameter parameter is a first preset threshold, and filter said top-view image by using said circular filter to obtain circular filter response information; a linear filtering unit configured to construct a linear filter of which a width parameter is a second preset threshold according to said direction information, and filter said top-view image by using said linear filter to obtain linear filter response information; an edge image generation unit configured to generate said edge image based on said gradient intensity, said difference information, said circular filter response information and said linear filter response information.
6. The detection apparatus according to claim 5, wherein said edge image generation unit is configured to generate pixels in said edge image according to:
7. The detection apparatus according to claim 1, wherein said marking line determination unit is further configured to determine two edges of one of said marking lines according to a fifth preset threshold; wherein said fifth preset threshold comprises a threshold of one of: a distance between the two edges of the marking line, and a gradient direction of the two edges of the marking line.
8. The detection apparatus according to claim 1, wherein said parking space determination unit is further configured to determine two parking marking lines of a particular parking space from a plurality of said marking lines according to a sixth preset threshold; and determine a region formed by said two parking marking lines as said parking space; wherein said sixth preset threshold comprises one of or a combination of: a threshold of distance between the two parking marking lines of the parking space, a threshold of a length difference between parking marking lines of the parking space and a threshold of a color difference between parking marking lines of the parking space.
9. A detection method for a parking space, comprising: performing conversion of a side-view image that is a photograph of the parking space and is acquired from a camera, to obtain a top-view image comprising said parking space; acquiring an edge image comprising a plurality of edges based on gradient information of said top-view image; performing conversion of said edge image and obtaining a voting vector according to said gradient information, and determining marking lines according to peak values of said voting vector; and determining one or more parking spaces based on a plurality of said marking lines.
10. An image processing device comprising the detection apparatus for parking space according to claim 1.
11. The detection apparatus according to claim 1, wherein the detection apparatus further comprises: a guidance unit providing parking guidance information for the parking space to a driver.
12. The detection method according to claim 9, further comprising: providing parking guidance information for the parking space to a driver.
13. A non-transitory computer readable recording medium storing a detection method for a parking space, the method comprising: performing conversion on a side-view image that is a photograph of the parking space and is acquired from a camera, to obtain a top-view image comprising said parking space; acquiring an edge image comprising a plurality of edges based on gradient information of said top-view image; performing conversion on said edge image and obtaining a voting vector according to said gradient information, and determining marking lines according to peak values of said voting vector; and determining one or more parking spaces based on a plurality of said marking lines.
14. A method, comprising: performing conversion of a side-view image of a parking space into a top-view image; determining gradient information of edges of the top-view image; obtaining a voting vector using said gradient information, and determining space marking lines using peak values of said voting vector; determining the parking space based on said marking lines; and providing parking guidance information for the parking space to a driver.
15. A non-transitory computer readable recording medium storing a method, the method comprising: performing conversion of a side-view image of a parking space into a top-view image; determining gradient information of edges of the top-view image; obtaining a voting vector using said gradient information, and determining space marking lines using peak values of said voting vector; determining the parking space based on said marking lines; and providing parking guidance information for the parking space to a driver.
16. An apparatus, comprising: a central processing unit having a processor and a memory, the processor including: an angle conversion unit configured to perform conversion of a side-view image of a parking space into a top-view image; an edge acquisition unit configured to determine gradient information of edges of the top-view image; a marking line determination unit configured to obtain a voting vector using said gradient information, and determine space marking lines using peak values of said voting vector; a parking space determination unit configured to determine the parking space based on said marking lines; and a guidance unit configured to provide parking guidance information for the parking space to a driver.
17. A method, comprising: performing conversion of a side-view image of a parking space into a top-view image; determining gradient information of edges of the top-view image; obtaining a voting vector using said gradient information, and determining space marking lines using peak values of said voting vector; determining the parking space based on said marking lines; and providing, to a driver, parking guidance information for the parking space comprising multiple different perspective views of the parking space.
18. The detection method according to claim 17, wherein the multiple different perspective views of the parking space comprise a side view and a top view.
19. The detection method according to claim 17, wherein the multiple different perspective views of the parking space provide distance to and position of the parking space.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0027] The accompanying drawings are included to provide further understanding of the embodiments of the present disclosure and constitute a part of the specification; they illustrate the embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure. The accompanying drawings described below are merely some embodiments of the disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
DETAILED DESCRIPTION
[0043] Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below by referring to the figures.
[0044] The aforementioned and other features of the embodiments of the disclosure will become apparent from the following description with reference to the accompanying drawings. The description and the accompanying drawings disclose specific embodiments of the disclosure, indicating some of the ways in which the principles of the disclosure may be employed. It should be understood that the present disclosure is not limited to the described embodiments; on the contrary, the present disclosure includes all modifications, variations and equivalents that fall within the scope of the appended claims.
The First Embodiment
[0045] The embodiment of the present disclosure provides a detection method for a parking space, for automatically detecting the parking space by processing an image acquired by a camera.
[0050] In this embodiment, a camera can be provided at a rear part of a vehicle, for example at a bumper, to acquire video of the area behind the vehicle. But the present disclosure is not limited to this; the camera can also be provided at any position of the vehicle as needed. From the video taken by the camera, a side-view image (also referred to as a rear-view image, represented by I_rear) of a parking space can be acquired.
[0052] In the step 101, it is possible to perform conversion on the side-view image to obtain a top-view image (also referred to as a bird-view image, represented by I_bird) including a parking space. For example, it is possible to convert the side-view image into the top-view image based on parameters of the camera; said parameters may include the following information: a focal length L of said camera, an included angle between said camera and a horizontal plane, and a height H of said camera from the ground. But the present disclosure is not limited to this; for example, other parameters can also be used for performing the conversion.
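As an illustrative sketch of this step, the pinhole-camera relation below maps ground points to side-view pixels so that a top-view image can be built by resampling. The symbols F, ALPHA, H, the principal point (CU, CV), the metres-per-pixel scale and all function names are assumptions for illustration; the patent does not give the exact projection formula.

```python
import math

# Sketch of the side-view -> top-view (inverse perspective) mapping using the
# three camera parameters named in the text: focal length F (pixels), tilt
# angle ALPHA between the optical axis and the horizontal plane, and mounting
# height H above the ground.  All names and values here are assumptions.
F, ALPHA, H = 500.0, 0.3, 1.0          # focal length, tilt (rad), height (m)
CU, CV = 240.0, 240.0                  # assumed principal point

def project_ground(x, d):
    """Project the ground point (lateral x, forward distance d) into the
    side-view image; returns pixel (u, v) with v growing downward."""
    # camera frame: optical axis pitched down by ALPHA, camera H above ground
    xc = x
    yc = -H * math.cos(ALPHA) + d * math.sin(ALPHA)
    zc =  H * math.sin(ALPHA) + d * math.cos(ALPHA)
    return (CU + F * xc / zc, CV - F * yc / zc)

def top_view_pixel(row, col, scale=0.02, rows=200, cols=200):
    """Map a bird-view pixel to the side-view pixel it should sample from.
    scale is metres per bird-view pixel (an assumed resolution)."""
    d = (rows - row) * scale           # forward distance from the camera
    x = (col - cols / 2) * scale       # lateral offset from the camera axis
    return project_ground(x, d)
```

Filling every bird-view pixel by sampling the side-view image at `top_view_pixel(row, col)` yields the top-view image I_bird.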
[0055] In the step 102, it is possible to acquire an edge image including a plurality of edges based on gradient information of said top-view image.
R_circ = I_bird * h_circ.
[0064] A step 504 of constructing a line filter of which a width parameter is a second preset threshold according to said direction information, and filtering said top-view image by using said line filter to obtain line filter response information;
[0065] in this embodiment, the width parameter w_line of the line filter h_line is the second preset threshold;
[0066] for example, w_line = width_line, where width_line may be the width of a typical parking marking line and can be determined in advance from an empirical value. The line filter response information can thereby be expressed as:
R_line = I_bird * h_line.
[0067] A step 505 of generating said edge image based on said gradient intensity, said difference information, said circular filter response information and said line filter response information.
[0068] In this embodiment, pixels in said edge image may be generated according to the following formula:
where (i, j) denotes the pixel to be generated; Diff(·) denotes said difference information and threshold_diff is a third preset threshold; Gs(·) denotes said gradient intensity; (i_prev, j_prev) and (i_next, j_next) are the two pixels adjacent to pixel (i, j) in said gradient direction; R_circ and R_line respectively denote said circular filter response information and said line filter response information, and threshold_R is a fourth preset threshold.
[0069] That is, if the above condition is satisfied, the pixel value Edge(i, j) of the pixel (i, j) in the edge image is 1; otherwise the pixel value Edge(i, j) is 0. A binarized image including a plurality of edges can thereby be obtained.
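The edge rule just described can be sketched as follows. The exact way the patent combines the difference test, the local-maximum test along the gradient direction and the filter-response test is not fully reproduced in the text, so the AND/OR structure and the threshold values below are assumptions.

```python
import numpy as np

# Sketch of the edge rule of paragraphs [0068]-[0069]: a pixel becomes an
# edge (value 1) when its difference response exceeds threshold_diff, its
# gradient intensity exceeds that of its two neighbours along the gradient
# direction, and the circular or line filter response exceeds threshold_R.
# The AND/OR combination here is an assumption, not the patent's formula.
def edge_image(diff, gs, gs_prev, gs_next, r_circ, r_line,
               threshold_diff=10.0, threshold_r=0.5):
    local_max = (gs > gs_prev) & (gs > gs_next)     # along gradient direction
    strong_diff = diff > threshold_diff
    filter_ok = (r_circ > threshold_r) | (r_line > threshold_r)
    return (strong_diff & local_max & filter_ok).astype(np.uint8)
```

Here `gs_prev` and `gs_next` hold the gradient intensities of the neighbours (i_prev, j_prev) and (i_next, j_next), precomputed per pixel.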
[0071] In the step 103, it is possible to perform conversion on said edge image and obtain a voting vector according to said gradient information, and to determine marking lines according to peak values of said voting vector.
[0072] For example, it is possible to perform a Hough transform on the edge image and obtain a voting vector Arr_Hough(r, θ) in the parameter space, where r represents a distance and θ represents an angle. For the pixel (i, j), if Edge(i, j) is 1, then
Arr_Hough(r = i cos θ + j sin θ, θ) is incremented by 1, for θ = 1, 2, 3 . . . 180;
[0073] Based on the direction information dir obtained in the step 501, a one-dimensional voting vector is obtained:
vec_Hough(r) = Arr_Hough(r, θ = dir).
[0074] In this voting vector vec_Hough(r), each peak value indicates a marking line along the previously obtained direction dir in the edge image Edge; the marking lines can therefore be determined according to the peak values of the voting vector. Moreover, determining the marking lines from the peak values of the voting vector better suppresses interference and further improves detection accuracy.
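A minimal sketch of this voting scheme, assuming one-degree angle steps, integer distance bins and a simple vote-count peak test (none of which are specified exactly in the text):

```python
import math
import numpy as np

# Sketch of paragraphs [0072]-[0074]: Hough-style voting over the edge
# image, keeping only the slice of the accumulator at the dominant
# direction `dir` obtained earlier from the gradient-direction histogram.
def voting_vector(edge, dir_deg, r_max):
    """Return vec_Hough(r) for the single angle dir_deg; r is offset by
    r_max so that negative distances fit in the array (an assumption)."""
    theta = math.radians(dir_deg)
    vec = np.zeros(2 * r_max + 1, dtype=np.int32)
    ys, xs = np.nonzero(edge)                 # pixels with Edge(i, j) == 1
    for i, j in zip(ys, xs):
        r = int(round(i * math.cos(theta) + j * math.sin(theta)))
        vec[r + r_max] += 1
    return vec

def peaks(vec, min_votes):
    """Indices whose vote count reaches min_votes: one marking line each."""
    return [r for r, v in enumerate(vec) if v >= min_votes]
```

For a vertical edge at column j = 3 with dir = 90°, every edge pixel votes for the same bin, so the line shows up as a single sharp peak.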
[0076] Furthermore, it is also possible to further determine the two edges of a marking line according to a fifth preset threshold; said fifth preset threshold includes a threshold of the distance between the two edges of the marking line and/or of the gradient direction of the two edges of the marking line.
[0077] For example, each marking line has two edges; if the distance between the two edges is equal or approximately equal to the width of a typical marking line (for example, a line width of 10 cm), and the two edges have opposite gradient directions, it can be determined that the two edges belong to one marking line, so that the marking line can be extracted.
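The edge-pairing test just described can be sketched as follows; representing an edge as a (position, gradient direction) pair and the tolerance values are illustrative assumptions.

```python
# Sketch of paragraphs [0076]-[0077]: two detected edges form one marking
# line when their separation matches a typical line width (e.g. 10 cm)
# and their gradient directions are opposite (about 180 degrees apart).
# Each edge is (position_cm, gradient_direction_deg); all tolerances are
# illustrative assumptions.
def pair_edges(edges, line_width_cm=10.0, width_tol_cm=2.0, angle_tol_deg=15.0):
    lines = []
    for a in range(len(edges)):
        for b in range(a + 1, len(edges)):
            (ra, ga), (rb, gb) = edges[a], edges[b]
            close = abs(abs(ra - rb) - line_width_cm) <= width_tol_cm
            # opposite gradients: directions roughly 180 degrees apart
            opposite = abs(((ga - gb) % 360) - 180.0) <= angle_tol_deg
            if close and opposite:
                lines.append(((ra + rb) / 2.0, a, b))  # centre + edge indices
    return lines
```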
[0079] In the step 104, it is possible to determine one or more parking spaces based on a plurality of said marking lines. It is possible to determine two parking marking lines of a certain or particular parking space from a plurality of said marking lines according to a sixth preset threshold, and to determine a region formed by said two parking marking lines as a parking space.
[0080] Said sixth preset threshold may include one of the following, or any combination thereof: a threshold of the distance between the two parking marking lines of a parking space (for example 3 m), a threshold of the length difference between the parking marking lines of a parking space (for example 10 cm) and a threshold of the color difference between the parking marking lines of a parking space (for example an RGB value difference of 10). But the present disclosure is not limited to this; for example, the parking space can also be determined according to other parameters.
[0081] For example, if the distance between two marking lines is about 3 m, the length difference between the two does not exceed 10 cm, and the difference between the RGB values of the two does not exceed 10, then it can be determined that the region between the two marking lines conforms to the features of a typical parking space.
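A sketch of this pairing test, modelling each marking line as a record with a position, a length and a mean RGB colour; the field names and default tolerances are illustrative assumptions.

```python
# Sketch of paragraphs [0079]-[0081]: two marking lines bound one parking
# space when their separation is near 3 m, their lengths differ by at
# most 10 cm, and their mean RGB values differ by at most 10 per channel.
# The dict fields and tolerance defaults below are assumptions.
def find_spaces(lines, dist_m=3.0, dist_tol_m=0.3,
                len_tol_cm=10.0, rgb_tol=10.0):
    spaces = []
    for a in range(len(lines)):
        for b in range(a + 1, len(lines)):
            la, lb = lines[a], lines[b]
            ok_dist = abs(abs(la["pos_m"] - lb["pos_m"]) - dist_m) <= dist_tol_m
            ok_len = abs(la["length_cm"] - lb["length_cm"]) <= len_tol_cm
            ok_rgb = all(abs(ca - cb) <= rgb_tol
                         for ca, cb in zip(la["rgb"], lb["rgb"]))
            if ok_dist and ok_len and ok_rgb:
                spaces.append((a, b))   # the region between lines a and b
    return spaces
```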
[0088] As shown in
[0091]
[0092] As shown in
[0095] In this embodiment, it is possible to automatically select a target parking space (for example the parking space closest to the vehicle), and it is also possible for the driver to manually select a target parking space and input corresponding information. Furthermore, it is possible to generate parking guidance information based on positional relationship between the target parking space and the vehicle, for example, alarm information for prompting the distance between the target parking space and the vehicle, and so on. Thereby after the parking space is detected automatically, parking guidance information can be better provided.
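A minimal sketch of the target selection and guidance generation described above, under the assumption that the closest detected space is chosen automatically; the distance threshold and message wording are illustrative, not from the patent.

```python
# Sketch of paragraph [0095]: automatically select the parking space
# closest to the vehicle and generate distance-prompt guidance from the
# positional relationship.  Coordinates are in metres on the ground plane.
def guidance(vehicle_xy, space_centres):
    if not space_centres:
        return None, "no parking space detected"
    def dist(c):
        return ((c[0] - vehicle_xy[0]) ** 2 + (c[1] - vehicle_xy[1]) ** 2) ** 0.5
    target = min(space_centres, key=dist)       # closest space as the target
    d = dist(target)
    note = "aligned, stop" if d < 0.5 else "target space %.1f m away" % d
    return target, note
```

A driver-selected target could replace the `min` call without changing the rest of the sketch.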
[0096] It can be seen from the above embodiment that: conversion is performed on a side-view image, which is a photograph of the parking space acquired from a camera, to obtain a top-view image; an edge image is acquired based on gradient information of the top-view image; a voting vector is obtained, and marking lines are determined according to the peak values of said voting vector. Thereby it is possible not only to observe the distance and position of a parking space visually and accurately, but also to detect the parking space automatically, with higher detection accuracy.
The Second Embodiment
[0097] The embodiment of the present disclosure provides a detection apparatus for a parking space; contents that are the same as those of the first embodiment will not be repeated.
[0104] As shown in
[0107] As shown in
[0110] In this embodiment, said angle conversion unit 1201 may be configured to convert said side-view image into said top-view image based on parameters of said camera; said parameters include a focal length of said camera, an included angle between said camera and a horizontal plane, and a height of said camera from the ground.
[0111] Said marking line determination unit 1203 may also be used for further determining two edges of said marking line according to a fifth preset threshold; said fifth preset threshold may include a threshold of distance between the two edges of the marking line and/or gradient direction of the two edges of the marking line; but the present disclosure is not limited to this.
[0112] Said parking space determination unit 1204 may also be used for determining two parking marking lines of a certain or particular parking space from a plurality of said marking lines according to a sixth preset threshold, and for determining a region formed by said two parking marking lines as said parking space; [0113] said sixth preset threshold may include one of the following, or any combination thereof: a threshold of the distance between the two parking marking lines of a parking space, a threshold of the length difference between the parking marking lines of a parking space and a threshold of the color difference between the parking marking lines of a parking space; but the present disclosure is not limited to this.
[0120] The edge image generation unit 1405 may be configured to generate pixels in said edge image according to the following formula:
where (i, j) denotes the pixel to be generated; Diff(·) denotes said difference information and threshold_diff is a third preset threshold; Gs(·) denotes said gradient intensity; (i_prev, j_prev) and (i_next, j_next) are the two pixels adjacent to pixel (i, j) in said gradient direction; R_circ and R_line respectively denote said circular filter response information and said line filter response information, and threshold_R is a fourth preset threshold.
[0121] It can be seen from the above embodiment that: conversion is performed on a side-view image, which is a photograph of the parking space acquired from a camera, to obtain a top-view image; an edge image is acquired based on gradient information of the top-view image; a voting vector is obtained, and marking lines are determined according to the peak values of said voting vector. Thereby it is possible not only to observe the distance and position of a parking space visually and accurately, but also to detect the parking space automatically, with higher detection accuracy.
The Third Embodiment
[0122] The embodiment of the present disclosure provides an image processing device, including: the detection apparatus for parking space according to the second embodiment.
[0124] In one embodiment, the function of the detection apparatus 1200 or 1300 of the parking space can be integrated into the central processing unit 100. The central processing unit 100 can be configured to realize the detection method for parking space according to the first embodiment.
[0125] In another embodiment, the detection apparatus 1200 or 1300 of the parking space can be configured separately from the central processing unit, for example, the detection apparatus 1200 or 1300 of the parking space can be configured as a chip/chips connected to the central processing unit 100, and the function of the detection apparatus 1200 or 1300 of the parking space can be realized through control of the central processing unit 100.
[0126] Furthermore, as shown in
[0127] The embodiment of the present disclosure further provides a computer-readable program, when the program is executed in the image processing device, the program enables the image processing device to carry out the detection method for parking space according to the first embodiment.
[0128] The embodiment of the present disclosure further provides a non-transitory computer readable storage medium in which a computer-readable program or method is stored, wherein the computer-readable program or method enables an image processing device to carry out the detection method for parking space according to the first embodiment.
[0129] The above devices and methods of the disclosure can be implemented by hardware, or by a combination of hardware and software. The disclosure relates to a computer readable program such that, when the program is executed by a logic component, the logic component can implement the preceding devices or constituent components, or realize the preceding methods or steps. The disclosure further relates to a non-transitory computer readable storage medium for storing the above programs or methods, such as a hard disk, a magnetic disk, an optical disk, a DVD, a flash memory and the like.
[0130] The disclosure has been described above with reference to specific embodiments, but those skilled in the art should understand that these descriptions are exemplary and do not limit the protection scope of the disclosure. Those skilled in the art can make various variations and modifications to the disclosure according to its principles, and these variations and modifications shall fall within the scope of the disclosure.
[0131] Regarding the embodiments including the above examples, the following appendixes are further provided:
[0132] (Appendix 1). A detection apparatus for parking space, including:
an angle conversion unit configured to perform conversion on a side-view image that is a photograph of the parking space and is acquired from a camera, to obtain a top-view image comprising said parking space;
an edge acquisition unit configured to acquire an edge image comprising a plurality of edges based on gradient information of said top-view image;
a marking line determination unit configured to perform conversion on said edge image and obtain a voting vector according to said gradient information, and determine marking lines according to peak values of said voting vector; and
a parking space determination unit configured to determine one or more parking spaces based on a plurality of said marking lines.
[0133] (Appendix 2). The detection apparatus according to the appendix 1, wherein the detection apparatus further includes:
an angle recovery unit configured to perform conversion on the top-view image comprising one or more said parking spaces to obtain a side-view image comprising said parking spaces; and
an image display unit configured to display a side-view image comprising said parking spaces.
[0134] (Appendix 3). The detection apparatus according to the appendix 1, wherein the detection apparatus further includes:
a target selection unit configured to select a target parking space from one or more said parking spaces; and
an information generation unit configured to generate parking guidance information based on positional relationship between said target parking space and a vehicle.
[0135] (Appendix 4). The detection apparatus according to the appendix 1, wherein said angle conversion unit is configured to convert said side-view image into said top-view image based on parameters of said camera; wherein said parameters include a focal length of said camera, an included angle between said camera and a horizontal plane, and a height of said camera from the ground.
[0136] (Appendix 5). The detection apparatus according to the appendix 1, wherein said edge acquisition unit includes:
an information acquisition unit configured to acquire gradient intensity and gradient direction of said top-view image, and calculate direction information based on a histogram of said gradient direction;
an image difference unit configured to perform a difference processing on said top-view image to obtain difference information;
a circular filtering unit configured to construct a circular filter of which a diameter parameter is a first preset threshold, and filter said top-view image by using said circular filter to obtain circular filter response information;
a line filtering unit configured to construct a line filter of which a width parameter is a second preset threshold according to said direction information, and filter said top-view image by using said line filter to obtain line filter response information;
an edge image generation unit configured to generate said edge image based on said gradient intensity, said difference information, said circular filter response information and said line filter response information.
[0137] (Appendix 6). The detection apparatus according to the appendix 5, wherein said edge image generation unit is configured to generate pixels in said edge image according to the following formula:
where (i, j) denotes the pixel to be generated; Diff(·) denotes said difference information and threshold_diff is a third preset threshold; Gs(·) denotes said gradient intensity; (i_prev, j_prev) and (i_next, j_next) are the two pixels adjacent to pixel (i, j) in said gradient direction; R_circ and R_line respectively denote said circular filter response information and said line filter response information, and threshold_R is a fourth preset threshold.
[0138] (Appendix 7). The detection apparatus according to the appendix 1, wherein said marking line determination unit is further configured to determine two edges of said marking line according to a fifth preset threshold;
[0139] (Appendix 8). The detection apparatus according to the appendix 7, wherein said fifth preset threshold comprises a threshold of distance between the two edges of the marking line and/or gradient direction of the two edges of the marking line.
[0140] (Appendix 9). The detection apparatus according to the appendix 1, wherein said parking space determination unit is further configured to determine two parking marking lines of a certain parking space from a plurality of said marking lines according to a sixth preset threshold; and determine a region formed by said two parking marking lines as said parking space.
[0141] (Appendix 10). The detection apparatus according to the appendix 9, wherein said sixth preset threshold comprises one of following information or any combination thereof: a threshold of distance between two parking marking lines of a parking space, a threshold of a length difference between parking marking lines of a parking space and a threshold of a color difference between parking marking lines of a parking space.
[0142] (Appendix 11). A detection method for parking space, including:
performing conversion on a side-view image that is a photograph of the parking space and is acquired from a camera, to obtain a top-view image comprising said parking space;
acquiring an edge image comprising a plurality of edges based on gradient information of said top-view image;
performing conversion on said edge image and obtaining a voting vector according to said gradient information, and determining marking lines according to peak values of said voting vector; and
determining one or more parking spaces based on a plurality of said marking lines.
[0143] (Appendix 12). The detection method according to the appendix 11, wherein the detection method further includes:
performing conversion on the top-view image comprising one or more said parking spaces to obtain a side-view image comprising said parking spaces; and
displaying a side-view image comprising said parking spaces.
[0144] (Appendix 13). The detection method according to the appendix 11, wherein the detection method further includes:
selecting a target parking space from one or more said parking spaces; and
generating parking guidance information based on positional relationship between said target parking space and a vehicle.
[0145] (Appendix 14). The detection method according to the appendix 11, wherein said side-view image is converted into said top-view image based on a parameter of said camera; wherein said parameter includes a focal length of said camera, an included angle between said camera and a horizontal plane, and a height of said camera from the ground.
[0146] (Appendix 15). The detection method according to the appendix 11, wherein, acquiring an edge image comprising a plurality of edges based on gradient information of said top-view image includes:
acquiring gradient intensity and gradient direction of said top-view image, and calculating direction information based on a histogram of said gradient direction;
performing a difference processing on said top-view image to obtain difference information;
constructing a circular filter of which a diameter parameter is a first preset threshold, and filtering said top-view image by using said circular filter to obtain circular filter response information;
constructing a line filter of which a width parameter is a second preset threshold according to said direction information, and filtering said top-view image by using said line filter to obtain line filter response information;
generating said edge image based on said gradient intensity, said difference information, said circular filter response information and said line filter response information.
[0147] (Appendix 16). The detection method according to the appendix 15, wherein pixels in said edge image are generated according to the following formula:
wherein (i, j) denotes the pixel to be generated; Diff(·) denotes said difference information and threshold_diff is a third preset threshold; Gs(·) denotes said gradient intensity; (i_prev, j_prev) and (i_next, j_next) are the two pixels adjacent to pixel (i, j) in said gradient direction; R_circ and R_line respectively denote said circular filter response information and said line filter response information, and threshold_R is a fourth preset threshold.
[0148] (Appendix 17). The detection method according to the appendix 11, wherein two edges of said marking line are further determined according to a fifth preset threshold;
said fifth preset threshold comprises a threshold of the distance between the two edges of the marking line and/or of the gradient direction of the two edges of the marking line.
[0149] (Appendix 18). The detection method according to the appendix 11, wherein two parking marking lines of a certain parking space are determined from a plurality of said marking lines according to a sixth preset threshold, and a region formed by said two parking marking lines is determined as said parking space;
said sixth preset threshold comprises one of following information or any combination thereof: a threshold of distance between two parking marking lines of a parking space, a threshold of a length difference between parking marking lines of a parking space and a threshold of a color difference between parking marking lines of a parking space.
[0150] (Appendix 19). An image processing device including the detection apparatus for parking space according to any one of the appendix 1 to appendix 10.
[0151] Although a few embodiments have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the embodiments, the scope of which is defined in the claims and their equivalents.