Vehicle and vehicle parking system
09725116 · 2017-08-08
Assignee
Inventors
CPC classification
B62D15/0285
PERFORMING OPERATIONS; TRANSPORTING
B60W10/18
PERFORMING OPERATIONS; TRANSPORTING
B60W30/06
PERFORMING OPERATIONS; TRANSPORTING
B60W10/04
PERFORMING OPERATIONS; TRANSPORTING
B60W2754/10
PERFORMING OPERATIONS; TRANSPORTING
B62D15/027
PERFORMING OPERATIONS; TRANSPORTING
B60W10/20
PERFORMING OPERATIONS; TRANSPORTING
International classification
A01B69/00
HUMAN NECESSITIES
B62D12/00
PERFORMING OPERATIONS; TRANSPORTING
B62D6/00
PERFORMING OPERATIONS; TRANSPORTING
B60W10/20
PERFORMING OPERATIONS; TRANSPORTING
B63H25/04
PERFORMING OPERATIONS; TRANSPORTING
G05D1/00
PHYSICS
B62D11/00
PERFORMING OPERATIONS; TRANSPORTING
G06F17/00
PHYSICS
B62D15/02
PERFORMING OPERATIONS; TRANSPORTING
B60W10/04
PERFORMING OPERATIONS; TRANSPORTING
B60W10/18
PERFORMING OPERATIONS; TRANSPORTING
G06F7/00
PHYSICS
Abstract
A vehicle is provided. The vehicle includes a camera configured to detect a target object in a parking space and a controller programmed to advance the vehicle into the parking space based on a yaw angle of the vehicle and a distance to the target object in response to the camera detecting the presence of the target object. The distance to the target object is based on a vector representing a boundary of the target object.
Claims
1. A method of parking a vehicle comprising: detecting a target object in a parking space with a camera; and advancing the vehicle into the parking space based on an angle between the target object and a perpendicular of a camera field of vision, and a distance from the camera to the target object based on a regression analysis that includes a vector representing a boundary segment of the target object as an input variable.
2. The method of claim 1, wherein the vector is generated based on an observed horizontal projection of the boundary segment and a distortion of the target object.
3. The method of claim 2, wherein the regression analysis also includes an angle between the vector and the horizontal projection as an input variable.
4. The method of claim 3, wherein the regression analysis also includes a position of the target object in the camera field of vision as an input variable.
5. The method of claim 3, wherein the distortion is based on a difference in position between a camera detected centroid of the target object and an expected centroid of the target object.
6. The method of claim 5, wherein the expected centroid is generated based on an observed second boundary segment of the target object.
7. The method of claim 1, wherein the distance from the camera to the target object is based on a single observation of the location of the target object.
8. The method of claim 1, wherein the regression analysis determines the distance to the target object based on the pixel location of the vector in the camera field of vision.
9. A vehicle comprising: a camera configured to detect a target object in a parking space; and a controller programmed to, in response to the camera detecting the target object, advance the vehicle into the parking space based on a vehicle yaw angle and a distance to the target object, the distance being based on a regression analysis that includes a vector representing a boundary segment of the target object as an input variable.
10. The vehicle of claim 9, wherein the vector is generated based on an observed horizontal projection of the boundary segment and a distortion of the target object.
11. The vehicle of claim 10, wherein the regression analysis also includes an angle between the vector and the horizontal projection as an input variable.
12. The vehicle of claim 11, wherein the regression analysis also includes a position of the target object in a field of vision of the camera as an input variable.
13. The vehicle of claim 11, wherein the distortion is based on a difference in position between a camera detected centroid of the target object and an expected centroid of the target object.
14. The vehicle of claim 13, wherein the expected centroid is generated based on an observed second boundary segment of the target object.
15. The vehicle of claim 9, wherein the regression analysis determines the distance to the target object based on the pixel location of the vector in the camera field of vision.
16. A vehicle comprising: a camera configured to output a signal indicating the presence of a target object in a parking space; and a controller in communication with the camera and programmed to, in response to receiving the signal, advance the vehicle into the parking space based on a distance from the camera to the target object and an angle between the target object and a perpendicular of a field of vision of the camera, the distance being based on a regression analysis that includes a vector representing a boundary segment of the target object, an angle between the vector and a horizontal projection of the boundary segment, and a position of the target object in the field of vision as input variables.
17. The vehicle of claim 16, wherein the vector is generated based on the horizontal projection and a distortion of the target object.
18. The vehicle of claim 17, wherein the distortion is based on a difference in position between a camera detected centroid of the target object and an expected centroid of the target object.
19. The vehicle of claim 18, wherein the expected centroid of the target object is generated based on an observed second boundary segment of the target object.
20. The vehicle of claim 16, wherein the regression analysis determines the distance to the target object based on the pixel location of the vector in the camera field of vision.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(7) Embodiments of the present disclosure are described herein. It is to be understood, however, that the disclosed embodiments are merely examples and other embodiments may take various and alternative forms. The figures are not necessarily to scale; some features could be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention. As those of ordinary skill in the art will understand, various features illustrated and described with reference to any one of the figures may be combined with features illustrated in one or more other figures to produce embodiments that are not explicitly illustrated or described. The combinations of features illustrated provide representative embodiments for typical applications. Various combinations and modifications of the features consistent with the teachings of this disclosure, however, could be desired for particular applications or implementations.
(8) Referring to
(9) While illustrated as one controller, the controller 18 may be part of a larger control system and may be controlled by various other controllers throughout the vehicle 10, such as a vehicle system controller (VSC). It should therefore be understood that the controller 18 and one or more other controllers can collectively be referred to as a “controller” that controls various functions of the vehicle 10 and/or actuators in response to signals from various sensors. Controller 18 may include a microprocessor or central processing unit (CPU) in communication with various types of computer readable storage devices or media. Computer readable storage devices or media may include volatile and nonvolatile storage in read-only memory (ROM), random-access memory (RAM), and keep-alive memory (KAM), for example. KAM is a persistent or non-volatile memory that may be used to store various operating variables while the CPU is powered down. Computer-readable storage devices or media may be implemented using any of a number of known memory devices such as PROMs (programmable read-only memory), EPROMs (electrically PROM), EEPROMs (electrically erasable PROM), flash memory, or any other electric, magnetic, optical, or combination memory devices capable of storing data, some of which represent executable instructions, used by the controller in controlling the vehicle.
(10) Referring now to
(11) Referring to
(12) Referring to
(13) In the illustrated example, the boundary of the target object 16 includes four segments, a right side 30, a left side 32, a topside 34, and a bottom side 36. Also, in the illustrated example, the expected shape of the target object 16 is a rectangle. The expected centroid 28 is generated based on a detection of the right side 30 of the boundary of the target object 16 and the expected rectangular shape of the target object 16. The detected centroid 26 is generated based on a center of the right side 30, left side 32, topside 34, and bottom side 36 of the boundary of the target object 16. The difference in position between the expected centroid 28 and the detected centroid 26 define a distortion of the target object 16 in the field of vision of the camera 24 (the distortion of the target object 16 may also be referred to as the skew). The detected centroid 26 and the expected centroid 28 may be generated using software that detects and relates geometric patterns, shapes, configurations, etc.
(14) Although the target object 16 is a rectangle and the right side 30 of the boundary of the target object 16 is used to generate the expected centroid 28 in the illustrated example, it should be noted that the target object 16 may be comprised of other shapes and the expected centroid 28 may be generated relative to any of the segments comprising the boundary of the target object 16.
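The centroid comparison described above can be sketched as follows. This is an illustrative Python example, not from the patent: the function names, the corner ordering, and the assumption of an axis-aligned image coordinate system with a rectangular target of known width are mine.

```python
import numpy as np

def detected_centroid(corners):
    """Detected centroid: the mean of the four detected boundary corners."""
    return np.mean(np.asarray(corners, dtype=float), axis=0)

def expected_centroid_from_right_side(right_top, right_bottom, width):
    """Expected centroid under the rectangle assumption: the midpoint of the
    detected right side, shifted half the known width into the rectangle."""
    right_top = np.asarray(right_top, dtype=float)
    right_bottom = np.asarray(right_bottom, dtype=float)
    mid = (right_top + right_bottom) / 2.0
    side = right_bottom - right_top
    # Rotate the side direction 90 degrees; for image coordinates with the
    # corners ordered top-left, top-right, bottom-right, bottom-left this
    # points into the rectangle (an assumption of this sketch).
    inward = np.array([-side[1], side[0]]) / np.linalg.norm(side)
    return mid + inward * (width / 2.0)

# Undistorted rectangular target: detected and expected centroids coincide,
# so the distortion (skew) vector is zero.
corners = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
det = detected_centroid(corners)
expected = expected_centroid_from_right_side((4.0, 0.0), (4.0, 2.0), width=4.0)
skew = det - expected
```

A distorted (skewed) detection would move the four corners, shifting the detected centroid away from the expected one and producing a nonzero skew vector.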
(15) An example of a method that may be used to quantify the distortion (skew) may include edge detection, filtering/thresholding, using a Hough transform (which may also be referred to as a Hough transformation) to calculate an angle from the perpendicular of a line on a plane to the horizontal axis of the plane (hereinafter the plane will be referred to as the XY plane and the horizontal axis will be referred to as the x-axis), and comparing the angle calculated using the Hough transform to an expected angle from the perpendicular of a segment of the target object 16 to the horizontal of the field of vision of the camera 24.
(16) Edges of the target object 16 may be detected using edge detection software. The edge detection software generates a gradient by comparing the values of neighboring pixels of a grayscale or monochromatic image in the field of vision of the camera 24. Once the comparison of the neighboring pixels is made, a resulting image is generated indicating a set of differences that illustrate the detected edges, with high-contrast edges being represented as larger differences.
(17) An edge detection algorithm may be based on the following equations (1)-(3):
B(j, k) = √([B_h(j, k)]^2 + [B_v(j, k)]^2) (1)
B_h(j, k) = A(j, k+1) − A(j, k−1) (2)
B_v(j, k) = A(j+1, k) − A(j−1, k) (3)
A is a matrix having j rows and k columns that represents a grayscale or monochromatic image.
B is a matrix representing the resulting gradient from comparing the values of neighboring pixels in the grayscale or monochromatic image.
(18) The edge detection software may also include filtering/thresholding. For example, a threshold may be set such that only the most prominent changes of the resulting gradient, represented by the matrix B, will be shown in the resulting image that represents the edges detected in the grayscale or monochromatic image. Additionally, the matrix A may be preprocessed to show a specific color channel or region of interest in the field of vision of a camera.
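Equations (1)-(3) and the thresholding step can be sketched in Python as follows; this is an illustrative example (the function names and the threshold value are my assumptions, not from the patent):

```python
import numpy as np

def edge_gradient(A):
    """Gradient magnitude per equations (1)-(3): central differences in the
    horizontal (B_h) and vertical (B_v) directions, combined in quadrature."""
    A = np.asarray(A, dtype=float)
    B_h = np.zeros_like(A)
    B_v = np.zeros_like(A)
    B_h[:, 1:-1] = A[:, 2:] - A[:, :-2]   # A(j, k+1) - A(j, k-1)
    B_v[1:-1, :] = A[2:, :] - A[:-2, :]   # A(j+1, k) - A(j-1, k)
    return np.sqrt(B_h**2 + B_v**2)       # equation (1)

def threshold_edges(B, t):
    """Filtering/thresholding: keep only the most prominent gradient changes."""
    return (B >= t).astype(np.uint8)

# Example: a dark-to-bright vertical step yields a detected vertical edge.
img = np.zeros((5, 6))
img[:, 3:] = 255.0
edges = threshold_edges(edge_gradient(img), t=100.0)
```

The central-difference form leaves the one-pixel image border at zero; a production implementation would pad or crop the border explicitly.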
(19) A single edge that was detected, using the edge detection software, forms a series of points, along a line, on the XY plane in the field of vision of the camera 24. A Hough transform may be performed on the series of points (or pixels) formed by the detected edge. The Hough transform involves generating a series of lines through each point (or a group of selected points) formed by a single detected edge. Each line, of the series of lines drawn through each point formed by the detected edge, is related to the XY plane by a perpendicular line from the origin. Next, an angle between the perpendicular line and the x-axis, and the length of the perpendicular line from the origin to the point where the perpendicular line intersects the line of interest, are generated. Each point of the detected edge may then be represented in Hough space as a sinusoidal curve plotted as the angle vs. the length of the perpendicular line. When the sinusoidal curves are plotted, the line that passes through all the points formed by a detected edge on the XY plane, in the field of vision of the camera 24, is represented by the point in Hough space where the sinusoidal curves overlap. The point in Hough space where the sinusoidal curves overlap gives the coordinates (the length from the origin to the detected edge and the angle from the x-axis) of the line that is perpendicular to the detected edge. The skew may then be determined by the difference between the angle at the point in Hough space where the sinusoidal curves overlap and the expected angle from the perpendicular of a segment of the target object 16 to the horizontal of the field of vision of the camera 24. Examples of the Hough transform are shown in Hough, U.S. Pat. No. 3,069,654, and in Duda, R. O. and Hart, P. E., "Use of the Hough Transformation to Detect Lines and Curves in Pictures," Communications of the ACM, Vol. 15, No. 1, pp. 11-15 (January 1972), the contents of each of which are hereby incorporated by reference in their entirety.
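The Hough-transform skew measurement can be sketched for the single-line case as follows. This is an illustrative Python example, not the patent's implementation: it uses a compact spread-minimization over candidate angles in place of a full discretized accumulator, which finds the same (theta, rho) peak when all points lie on one line.

```python
import numpy as np

def hough_line_angle(points, n_theta=1801):
    """For edge points on a single line, find the angle theta (from the
    x-axis) and length rho of the perpendicular from the origin to that
    line: each point traces the sinusoid rho = x*cos(theta) + y*sin(theta),
    and the curves overlap (agree on rho) at the correct theta."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    pts = np.asarray(points, dtype=float)
    # rho for every (point, theta) pair: shape (n_points, n_theta).
    rhos = pts[:, 0:1] * np.cos(thetas) + pts[:, 1:2] * np.sin(thetas)
    # The correct theta is where all points produce (nearly) the same rho.
    spread = rhos.max(axis=0) - rhos.min(axis=0)
    best = np.argmin(spread)
    return thetas[best], rhos[:, best].mean()

# Example: an edge tilted 10 degrees from vertical; its perpendicular from
# the origin then makes a 10-degree angle with the x-axis.
tilt = np.deg2rad(10.0)
ys = np.linspace(0.0, 20.0, 21)
xs = 5.0 - ys * np.tan(tilt)
theta, rho = hough_line_angle(np.column_stack([xs, ys]))
expected_angle_deg = 0.0           # a vertical segment is expected
skew_deg = np.degrees(theta) - expected_angle_deg
```

Comparing the recovered angle to the expected angle of the target-object segment yields the skew, here about 10 degrees.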
(20) It should be noted that methods other than the Hough transform may be used to detect the skew (distortion) of the target object 16, including, but not limited to, the Fourier method, the projection profile method, the nearest neighbor clustering method, and the correlation method.
(21) When determining the skew of the target object 16, the process may also take into account the internal properties of the lens that cause distortion (including radial and/or tangential distortion) of the projected image on the image plane of the camera. Several algorithms and equations known in the art, which should be construed as disclosed herein, may be used to correct either barrel or pincushion type distortion of a camera lens.
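One common radial-distortion model can be sketched as follows. This is an illustrative first-order Python example with hypothetical coefficients, not the patent's method; a full calibration pipeline would estimate k1 and k2 from calibration images and typically iterate (or use inverse coefficients) for an exact correction.

```python
import numpy as np

def undistort_radial(points, k1, k2, center=(0.0, 0.0)):
    """Approximate correction of barrel or pincushion distortion with the
    common polynomial radial model: each point, measured from the image
    center, is scaled by (1 + k1*r^2 + k2*r^4)."""
    c = np.asarray(center, dtype=float)
    pts = np.asarray(points, dtype=float) - c
    r2 = np.sum(pts**2, axis=-1, keepdims=True)   # squared radius per point
    return pts * (1.0 + k1 * r2 + k2 * r2**2) + c

# With zero coefficients the mapping is the identity; a positive k1 pushes
# points radially outward (countering barrel-type compression).
pts = np.array([[0.1, 0.2], [-0.3, 0.05]])
same = undistort_radial(pts, k1=0.0, k2=0.0)
moved = undistort_radial(np.array([[1.0, 0.0]]), k1=0.1, k2=0.0)
```
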
(22) Referring to
(23) Referring to
(24) The regression analysis algorithm 42 may consist of comparing new data to previously accumulated data (which may also be referred to as training data). The training data is utilized to create a map of inputs and outputs and should be designed such that new data scales with minimal error when compared to the training data. Here, the training data and the new data may relate pixel location in the field of vision of the camera to distance from the camera 14. Therefore, once the direction and magnitude of the vector 38 that represents a segment of the boundary of the target object 16 are determined, it is possible to determine the distance to the vector 38 (i.e., the distance from the camera 14 to the target object 16) by comparing the pixel location of the vector 38 to the previously learned data that relates pixel location and distance.
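The pixel-location-to-distance mapping can be sketched as a simple lookup against training data. This is an illustrative Python example: the table values are fabricated for illustration (a real table would come from calibration data), and linear interpolation stands in for whatever regression model the system actually uses.

```python
import numpy as np

# Hypothetical training data pairing the image-row pixel location of the
# target's boundary vector with ground-truth camera-to-target distances.
# These numbers are made up for illustration only.
train_pixel_row = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
train_distance_m = np.array([8.0, 5.5, 3.9, 2.8, 2.0])

def distance_from_pixel(row):
    """Estimate distance by interpolating a new observation against the
    previously accumulated (training) data."""
    return float(np.interp(row, train_pixel_row, train_distance_m))

# A boundary vector observed at row 250 falls between the 5.5 m and 3.9 m
# training points.
d = distance_from_pixel(250.0)
```
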
(25) The regression analysis algorithm 42 may consist of a hypothesis function that is used to map the data points and a cost function that is used to compute the accuracy of the hypothesis function. Examples of the regression analysis algorithm 42 may include, but are not limited to, linear models, polynomial models, logistic models, and neural networks.
(26) A linear regression analysis may be based on the following hypothesis function equation (4) and cost function equation (5):
(27) h_θ(x) = θ_0 + θ_1x (4)
J(θ) = (1/(2m)) Σ_(i=1)^(m) (h_θ(x^(i)) − y^(i))^2 (5)
h_θ is the hypothesis function, θ_0 and θ_1 are the model parameters, m is the number of training examples, and (x^(i), y^(i)) is the i-th training example.
(28) A polynomial regression analysis may be based on the following hypothesis function equation (6) and cost function equation (7):
(29) h_θ(x) = θ_0 + θ_1x + θ_2x^2 + . . . + θ_nx^n (6)
J(θ) = (1/(2m)) Σ_(i=1)^(m) (h_θ(x^(i)) − y^(i))^2 (7)
(30) A logistic regression analysis may be based on the following hypothesis function equation (8) and cost function equation (9):
(31) h_θ(x) = 1/(1 + e^(−θ^T x)) (8)
J(θ) = −(1/m) Σ_(i=1)^(m) [y^(i) log(h_θ(x^(i))) + (1 − y^(i)) log(1 − h_θ(x^(i)))] (9)
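The hypothesis/cost structure described for the linear case can be sketched in Python, with gradient descent minimizing the cost. This is an illustrative example: the function names, learning rate, and iteration count are my assumptions, not values from the patent.

```python
import numpy as np

def hypothesis(theta, X):
    """Linear hypothesis h_theta(x) = theta_0 + theta_1 * x."""
    return theta[0] + theta[1] * X

def cost(theta, X, y):
    """Cost J(theta) = (1 / (2m)) * sum((h_theta(x) - y)^2)."""
    m = len(y)
    return np.sum((hypothesis(theta, X) - y) ** 2) / (2.0 * m)

def gradient_descent(X, y, alpha=0.1, iters=10000):
    """Fit theta by repeatedly stepping opposite the cost gradient."""
    theta = np.zeros(2)
    m = len(y)
    for _ in range(iters):
        err = hypothesis(theta, X) - y
        theta[0] -= alpha * np.sum(err) / m        # d(J)/d(theta_0)
        theta[1] -= alpha * np.sum(err * X) / m    # d(J)/d(theta_1)
    return theta

# Noiseless samples of y = 2 + 3x; gradient descent recovers the parameters.
X = np.linspace(0.0, 1.0, 20)
y = 2.0 + 3.0 * X
theta = gradient_descent(X, y)
```

The polynomial and logistic variants differ only in the hypothesis (and, for logistic regression, the cost) while the same gradient-descent loop applies.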
(32) Referring to
(33) The words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the disclosure. As previously described, the features of various embodiments may be combined to form further embodiments of the invention that may not be explicitly described or illustrated. While various embodiments could have been described as providing advantages or being preferred over other embodiments or prior art implementations with respect to one or more desired characteristics, those of ordinary skill in the art recognize that one or more features or characteristics may be compromised to achieve desired overall system attributes, which depend on the specific application and implementation. These attributes may include, but are not limited to, cost, strength, durability, life cycle cost, marketability, appearance, packaging, size, serviceability, weight, manufacturability, ease of assembly, etc. As such, embodiments described as less desirable than other embodiments or prior art implementations with respect to one or more characteristics are not outside the scope of the disclosure and may be desirable for particular applications.