Apparatus and method for providing top view image of parking space
11615631 · 2023-03-28
Assignee
Inventors
CPC classification
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
B62D15/0285
PERFORMING OPERATIONS; TRANSPORTING
G08G1/168
PHYSICS
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/607
PERFORMING OPERATIONS; TRANSPORTING
B62D1/00
PERFORMING OPERATIONS; TRANSPORTING
G06V10/25
PHYSICS
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
B62D15/021
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/00
PERFORMING OPERATIONS; TRANSPORTING
International classification
G06V20/58
PHYSICS
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
B62D15/02
PERFORMING OPERATIONS; TRANSPORTING
Abstract
An apparatus and method for providing a top view image of a parking space are provided. The apparatus includes a steering angle sensor that measures a steering angle of a vehicle, a top view image generator that generates a top view image of a parking space depending on travel of the vehicle, a display that displays the top view image generated by the top view image generator, and a controller that captures the top view image displayed by the display and generates a panorama top view image by connecting the current top view image generated by the top view image generator and the captured previous top view image, based on the steering angle measured by the steering angle sensor.
Claims
1. An apparatus for providing a top view image of a parking space, the apparatus comprising: a steering angle sensor configured to measure a steering angle of a vehicle; a display configured to display the top view image of the parking space depending on travel of the vehicle; a processor; and a non-transitory storage medium containing program instructions that, when executed by the processor, cause the apparatus to: capture the top view image; and generate a panorama top view image by connecting the displayed top view image and the captured top view image based on the measured steering angle, wherein the program instructions when executed are configured to remove an area of overlap between the displayed top view image and the captured top view image caused by the steering angle, and wherein the program instructions when executed are configured to apply a first shading to a captured previous top view image when the steering angle is less than a predetermined number of degrees and a distance from the vehicle is less than a first predetermined distance, and apply a second shading to the captured previous top view image when the steering angle is more than the predetermined number of degrees and the distance from the vehicle is more than the first predetermined distance.
2. The apparatus of claim 1, wherein the program instructions when executed are configured to control the display to display the generated panorama top view image.
3. The apparatus of claim 2, wherein the apparatus further comprises: a motion detection sensor configured to detect a movement around an empty parking place.
4. The apparatus of claim 3, wherein the program instructions when executed are configured to control the display to additionally display a warning icon at a position on the panorama top view image that corresponds to the empty parking place around which the movement is detected by the motion detection sensor.
5. The apparatus of claim 1, wherein the program instructions when executed are configured to store the captured top view image in storage.
6. The apparatus of claim 5, wherein the program instructions when executed are configured to delete the stored top view image when a lifetime of the stored top view image exceeds a predetermined amount of time.
7. The apparatus of claim 5, wherein the program instructions when executed are configured to delete the stored top view image when the vehicle travels a second predetermined distance.
8. A method for providing a top view image of a parking space, the method comprising: measuring, by a steering angle sensor, a steering angle depending on travel of a vehicle; generating, by a top view image generator, the top view image of the parking space depending on travel of the vehicle; displaying, by a display, the generated top view image; capturing, by a controller, the top view image; and generating, by the controller, a panorama top view image by connecting the generated top view image and the captured top view image based on the measured steering angle, wherein the generating of the panorama top view image includes: removing an area of overlap between the generated top view image and the captured top view image caused by the steering angle; applying a first shading to a captured previous top view image when the steering angle is less than a predetermined number of degrees and a distance from the vehicle is less than a first predetermined distance; and applying a second shading to the captured previous top view image when the steering angle is more than the predetermined number of degrees and the distance from the vehicle is more than the first predetermined distance.
9. The method of claim 8, wherein the method further comprises: displaying, by the display, the generated panorama top view image.
10. The method of claim 9, wherein the method further comprises: detecting, by a motion detection sensor, a movement around an empty parking place; and displaying, by the display, a warning icon at a position on the panorama top view image that corresponds to the empty parking place around which the movement is detected.
11. The method of claim 8, wherein the method further comprises: storing, by a storage, the captured top view image; and deleting, by the controller, the stored top view image when a lifetime of the stored top view image exceeds a predetermined amount of time.
12. The method of claim 8, wherein the method further comprises: storing, by a storage, the captured top view image; and deleting, by the controller, the stored top view image when the vehicle travels a second predetermined distance.
Description
DRAWINGS
(1) In order that the disclosure may be well understood, there will now be described various forms thereof, given by way of example, reference being made to the accompanying drawings, in which:
(14) The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.
DETAILED DESCRIPTION
(15) The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.
(16) Hereinafter, some forms of the present disclosure will be described in detail with reference to the exemplary drawings. In adding the reference numerals to the components of each drawing, it should be noted that the identical or equivalent component is designated by the identical numeral even when they are displayed on other drawings. Further, in describing some forms of the present disclosure, a detailed description of well-known features or functions will be ruled out in order not to unnecessarily obscure the gist of the present disclosure.
(17) In describing some forms of the present disclosure, terms such as first, second, “A”, “B”, (a), (b), and the like may be used. These terms are merely intended to distinguish one component from another component, and the terms do not limit the nature, sequence or order of the components. Unless otherwise defined, all terms used herein, including technical or scientific terms, have the same meanings as those generally understood by those skilled in the art to which the present disclosure pertains. Such terms as those defined in a generally used dictionary are to be interpreted as having meanings equal to the contextual meanings in the relevant field of art, and are not to be interpreted as having ideal or excessively formal meanings unless clearly defined as having such in the present application.
(19) As illustrated in
(20) The components will be described below in detail. The input device 10 may receive, from a user, an input of a command to display a top view image of the current parking space or an input of a command to display a panorama top view image.
(21) The input device 10 may receive the user's request in various ways such as a touch on a screen, a button input, and the like. The input device 10 may receive a swipe for moving the screen, as the touch on the screen.
(22) The top view image generator 20 may be equipped in a vehicle and may generate a top view image of a parking space using a two-dimensional image of the parking space that is taken by a plurality of cameras. Hereinafter, a detailed configuration of the top view image generator 20 will be described with reference to
(24) As illustrated in
(25) The camera 110 includes a left camera 112 attached to a lower end of a left side mirror of the vehicle, a right camera 114 attached to a lower end of a right side mirror of the vehicle, a front camera 116 attached to the front of the vehicle, and a rear camera 118 attached to the rear of the vehicle.
(26) The left camera 112 takes an image of the ground on a left side of the vehicle and generates a left image I.sub.left on a frame-by-frame basis, and the right camera 114 takes an image of the ground on a right side of the vehicle and generates a right image I.sub.right on a frame-by-frame basis.
(27) The front camera 116 takes an image of the ground ahead of the vehicle and generates a front image I.sub.front on a frame-by-frame basis, and the rear camera 118 takes an image of the ground behind the vehicle and generates a rear image I.sub.back on a frame-by-frame basis.
(28) The ground feature point detector 120 detects a ground feature point P.sub.ground and a translation vector of the ground feature point P.sub.ground on a frame-by-frame basis from at least one image from which the ground feature point P.sub.ground is detectable, among the left image I.sub.left provided from the left camera 112, the right image I.sub.right provided from the right camera 114, the front image I.sub.front provided from the front camera 116, and the rear image I.sub.back provided from the rear camera 118. Here, the image from which the ground feature point P.sub.ground is detectable may include the left image or the right image.
(29) An optical flow based feature point extraction algorithm, such as the block matching method, the Horn-Schunck algorithm, the Lucas-Kanade algorithm, the Gunnar Farneback's algorithm, or the like, may be used as a method for detecting the ground feature point P.sub.ground and the translation vector of the ground feature point P.sub.ground from the left image or the right image. The algorithms are well-known technologies, and therefore detailed descriptions thereabout will be omitted.
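As a concrete illustration of the block matching method mentioned above, the following Python sketch estimates the dominant translation between two grayscale frames by minimizing the sum of absolute differences (SAD) over a small search window. The function name, block size, and search radius are illustrative assumptions, not part of the disclosure; a production system would track many feature points (e.g., with the Lucas-Kanade algorithm) rather than a single central block.

```python
import numpy as np

def block_matching_translation(prev, curr, block=8, search=4):
    # Compare a central reference block of `prev` against shifted
    # candidate blocks of `curr` and keep the shift with the smallest
    # sum of absolute differences (SAD).
    h, w = prev.shape
    cy, cx = h // 2, w // 2
    ref = prev[cy:cy + block, cx:cx + block].astype(np.int64)
    best_sad, best_dx, best_dy = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue
            cand = curr[y:y + block, x:x + block].astype(np.int64)
            sad = np.abs(ref - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_dx, best_dy = sad, dx, dy
    return best_dx, best_dy
```

Shifting a synthetic frame by a known amount and recovering that shift is a quick sanity check for such a matcher.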
(30) The first attitude angle estimation device 130 estimates first attitude angles of the left camera 112 and the right camera 114, based on the ground feature point P.sub.ground and the translation vector of the ground feature point P.sub.ground that are input from the ground feature point detector 120, and generates a transformation matrix [M.sub.1] that represents the estimated first attitude angles in a matrix form.
(31) The first rotational transformation device 140 generates a first top view image by performing rotational transformation of the left image I.sub.left and the right image I.sub.right, based on the transformation matrix [M.sub.1] that is input from the first attitude angle estimation device 130. Here, the first top view image includes a left top view image TI.sub.LEFT obtained by performing rotational transformation of the left image I.sub.left using the transformation matrix [M.sub.1] and a right top view image TI.sub.RIGHT obtained by performing rotational transformation of the right image I.sub.right using the transformation matrix [M.sub.1].
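At the level of individual points, a rotational transformation by a 3x3 matrix such as [M.sub.1] amounts to lifting each image coordinate to homogeneous coordinates, multiplying by the matrix, and projecting back. The sketch below shows only this point mapping with an assumed example matrix; warping a full image additionally requires interpolation.

```python
import numpy as np

def warp_points(points, m):
    # Map (N, 2) image points through a 3x3 transformation matrix
    # using homogeneous coordinates, then divide by the third
    # component to return to Cartesian coordinates.
    pts = np.hstack([points, np.ones((len(points), 1))])
    out = pts @ m.T
    return out[:, :2] / out[:, 2:3]
```

With a pure scale-and-translate matrix, the mapping can be checked by hand: scaling by 2 and translating by (1, 2) sends (3, 4) to (7, 10).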
(32) The line detector 150 detects a first parking line P.sub.line1 from the left top view image TI.sub.LEFT and the right top view image TI.sub.RIGHT that are input from the first rotational transformation device 140. Here, the first parking line P.sub.line1 includes a left parking line detected from the left top view image TI.sub.LEFT and a right parking line detected from the right top view image TI.sub.RIGHT.
(33) A line detection algorithm may be used as a method for detecting the left parking line and the right parking line from the left top view image TI.sub.LEFT and the right top view image TI.sub.RIGHT. However, the present disclosure is not characterized by the line detection algorithm, and therefore detailed description thereabout will be omitted. Furthermore, a technology for obtaining not only a line but also line-related information, such as the thickness and direction of the line, through the line detection algorithm is well known, and therefore detailed description thereabout will be omitted.
(34) The line detector 150 detects a second parking line P.sub.line2, which is connected to the first parking line P.sub.line1, from the front image I.sub.front and the rear image I.sub.back that are input from the camera 110. Here, the second parking line P.sub.line2 includes an upper parking line detected from the front image I.sub.front and a lower parking line detected from the rear image I.sub.back.
(35) The second attitude angle estimation device 160 estimates second attitude angles of the front camera 116 and the rear camera 118, based on a correspondence relation between a feature pattern of the first parking line P.sub.line1 and a feature pattern of the second parking line P.sub.line2 that are input from the line detector 150, and generates a transformation matrix [M.sub.2] that represents the estimated second attitude angles in a matrix form.
(36) The second rotational transformation device 170 generates a second top view image by performing rotational transformation of the front image I.sub.front and the rear image I.sub.back using the transformation matrix [M.sub.2] that is input from the second attitude angle estimation device 160. Here, the second top view image includes a front top view image TI.sub.FRONT obtained by performing rotational transformation of the front image I.sub.front using the transformation matrix [M.sub.2] and a rear top view image TI.sub.BACK obtained by performing rotational transformation of the rear image I.sub.back using the transformation matrix [M.sub.2].
(37) The image synthesizing device 180 generates the final top view image by synthesizing the first top view image TI.sub.LEFT and TI.sub.RIGHT input from the first rotational transformation device 140 and the second top view image TI.sub.FRONT and TI.sub.BACK input from the second rotational transformation device 170.
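A minimal sketch of the synthesis step, assuming four already-rotated grayscale views and a simple paste-around-a-blank-center layout. The layout (front on top, rear on the bottom, left and right on the sides) is an illustrative assumption; a real system blends calibrated, overlapping regions instead of pasting disjoint rectangles.

```python
import numpy as np

def synthesize_top_view(front, back, left, right):
    # Paste the four rotated camera views onto one canvas; the
    # untouched center region (the vehicle footprint) stays zero.
    fh, fw = front.shape
    lh, lw = left.shape
    canvas = np.zeros((fh + lh + back.shape[0], lw + fw + right.shape[1]),
                      dtype=front.dtype)
    canvas[:fh, lw:lw + fw] = front            # front strip on top
    canvas[-back.shape[0]:, lw:lw + fw] = back  # rear strip on the bottom
    canvas[fh:fh + lh, :lw] = left              # left strip
    canvas[fh:fh + lh, -right.shape[1]:] = right  # right strip
    return canvas
```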
(38) The above-described configuration of the top view image generator 20 is merely illustrative, and the present disclosure is not characterized by the technology for generating the top view image. Accordingly, any well-known technology may be used.
(39) The display 30 displays a top view image of a parking space that is generated by the top view image generator 20. At this time, the top view image of the parking space may further include a warning icon.
(40) The display 30 may display a panorama top view image in which the current top view image and the previous top view image of a parking space are connected.
(41) The display 30 may include at least one of a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, or an e-ink display.
(42) The storage 40 may store various types of logic, algorithms, and programs that are required in a process of capturing a top view image of a parking space on a path along which the vehicle travels and providing a panorama top view image in which the current top view image and the previous top view image are connected, based on a steering angle depending on the travel of the vehicle.
(43) The storage 40 may store a top view image of a parking space that is captured by the controller 70.
(44) The storage 40 may include at least one type of storage medium among memories of a flash memory type, a hard disk type, a micro type, and a card type (e.g., a secure digital (SD) card or an eXtreme Digital (XD) card) and memories of a random access memory (RAM) type, a static RAM (SRAM) type, a read-only memory (ROM) type, a programmable ROM (PROM) type, an electrically erasable PROM (EEPROM) type, a magnetic RAM (MRAM) type, a magnetic disk type, and an optical disk type.
(45) The motion detection sensor 50 is mounted on the rear, right, or left side of the vehicle and detects a movement around an empty parking place. That is, the motion detection sensor 50 may detect a vehicle approaching the empty parking place and may detect a vehicle leaving the parking place.
(46) The motion detection sensor 50 may include at least one of an ultrasonic sensor, a rear camera, or a radar.
(47) The steering angle sensor 60 measures a steering angle depending on travel of the vehicle.
(48) The controller 70 performs overall control to enable the components to normally perform functions thereof. The controller 70 may be implemented in a hardware or software form, or may be implemented in a form in which hardware and software are combined. The controller 70 may preferably be implemented with, but is not limited to, a microprocessor.
(49) The controller 70 may capture a top view image of a parking space on a path along which the vehicle travels and may provide a panorama top view image in which the current top view image and the previous top view image are connected, based on a steering angle depending on the travel of the vehicle. At this time, the controller 70 may operate in conjunction with the parking assistance system of the vehicle to capture the top view image of the parking space on the path along which the vehicle travels.
(50) The controller 70 may control the display 30 to display a top view image of a parking space that is generated by the top view image generator 20. The top view image displayed by the display 30 is a real-time image as illustrated in
(51) The controller 70 may capture the top view image of the parking space that is displayed by the display 30 and may store the captured image in the storage 40. When the lifetime of a top view image stored in the storage 40 exceeds a critical time (e.g., five minutes, ten minutes, or the like) or the vehicle moves a critical distance (e.g., 30 m or 50 m), the controller 70 may determine that the image is no longer valid and may delete it from the storage 40. That is, the controller 70 may delete top view images that have been stored for the critical time or more from among the top view images stored in the storage 40.
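A minimal sketch of this eviction policy, assuming each captured image carries a capture timestamp and the odometer reading at capture time. The field names, the dictionary representation, and the exact threshold values are illustrative assumptions, not from the disclosure.

```python
CRITICAL_TIME_S = 300        # e.g., five minutes
CRITICAL_DISTANCE_M = 30.0   # e.g., 30 m

def prune_captures(captures, now, odometer_m):
    # Keep only captures that are both fresh (within the critical
    # time) and near (taken within the critical distance of the
    # vehicle's current odometer reading).
    return [c for c in captures
            if now - c['taken_at'] <= CRITICAL_TIME_S
            and odometer_m - c['odometer_m'] <= CRITICAL_DISTANCE_M]
```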
(52) The controller 70 may generate a panorama top view image as illustrated in
(54) In
(55) The controller 70 may shade the captured previous top view image to increase visibility. For example, the controller 70 may apply 20% shading to an area 421 as a first step and may apply 50% shading to an area 422 as a second step. The number of areas to which shading is applied and the percentage of the shading may be varied depending on a designer's intention. The controller 70 may display the shaded panorama top view image on the navigation map.
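The stepped shading described above can be sketched as a per-area darkening of pixel values, where 100% shading would map everything to black. The row-slice interface and the dtype handling below are illustrative assumptions; the 20% and 50% steps follow the example in the text.

```python
import numpy as np

def shade_area(image, rows, percent):
    # Darken the given row band of the image by `percent` shading;
    # e.g., 20% shading multiplies pixel values by 0.8.
    out = image.astype(np.float64).copy()
    start, stop = rows
    out[start:stop] *= (1.0 - percent / 100.0)
    return out.astype(image.dtype)
```

Applying the two steps in sequence, as in the example, gives a progressively darker previous image the further it is from the current view.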
(56) Hereinafter, a process in which the controller 70 generates (synthesizes) a panorama top view image will be described in detail with reference to
(58) In
(59) The controller 70 may generate a panorama top view image by connecting the previous top view image H.sub.p-1 to the current top view image H.sub.p.
(60) However, although the next top view image H.sub.p-1 is generated after the first top view image H.sub.p-2, the current top view image H.sub.p may overlap the previous top view image H.sub.p-1 as illustrated in
(61) In this case, because the current top view image H.sub.p serves as the base, a panorama top view image is generated by cutting off the portion of the previous top view image H.sub.p-1 that overlaps the current top view image H.sub.p and sequentially connecting the remaining previous top view image H.sub.p-1 and the first top view image H.sub.p-2 to the current top view image H.sub.p.
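A minimal sketch of this cut-and-connect step, assuming the two images are already aligned vertically and the size of the overlapping band (which would be derived from the steering angle) is given. The vertical stacking direction and pixel-row interface are illustrative assumptions.

```python
import numpy as np

def stitch_panorama(current, previous, overlap_px):
    # Keep the current image intact as the base; drop the rows of the
    # previous image that overlap it, then connect the two.
    if overlap_px > 0:
        previous = previous[overlap_px:]
    return np.vstack([current, previous])
```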
(62) The controller 70 may adjust the length of the panorama top view image. That is, the controller 70 may adjust the size of the panorama top view image displayed on the display 30.
(63) The controller 70 may control the display 30 to display the panorama top view image as illustrated in
(65) As illustrated in
(66) Meanwhile, the controller 70 may additionally generate position information of an empty parking place on a top view image generated by the top view image generator 20.
(68) As illustrated in
(69) The controller 70 may shade the captured previous top view images to increase visibility. For example, the controller 70 may apply 20% shading to the area 721 as a first step and may apply 50% shading to the area 722 as a second step. The number of areas to which shading is applied and the percentage of the shading may be varied depending on the designer's intention. The controller 70 may display the shaded panorama top view image on the navigation map.
(70) Hereinafter, a process in which the controller 70 generates (synthesizes) a panorama top view image will be described in detail with reference to
(72) As illustrated in
(73) Accordingly, the controller 70 may generate a panorama top view image by connecting the current top view image H.sub.p and the previous top view images H.sub.p-1 and H.sub.p-2, based on the steering angle measured by the steering angle sensor 60. At this time, the controller 70 may use the panorama image generation technology that is a well-known and common technology.
(75) In
(76) Accordingly, the position coordinates of the empty parking place may be represented by (x, y, θ). In the case where the empty parking place is located on a left side of the vehicle, the position coordinates represent the position of the upper left corner of the empty parking place, and in the case where the empty parking place is located on a right side of the vehicle, the position coordinates represent the position of the upper right corner of the empty parking place. Furthermore, x and y are the coordinates based on the xc-yc axes, and the unit of distance is the centimeter (cm). For example, when x is 50, the empty parking place is spaced apart from the xc axis by 50 cm.
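The (x, y, θ) tuple and the side-dependent reference corner can be sketched as a small data structure. The class and field names are illustrative, and θ is taken here to be the orientation angle of the parking place, which is an assumption.

```python
from dataclasses import dataclass

@dataclass
class EmptyParkingPlace:
    # x and y are in centimeters on the vehicle-centered xc-yc axes;
    # theta_deg is assumed to be the orientation of the parking place.
    x_cm: float
    y_cm: float
    theta_deg: float
    side: str  # 'left' or 'right' of the vehicle

    def reference_corner(self):
        # Left-side places are referenced by their upper-left corner,
        # right-side places by their upper-right corner.
        return 'upper-left' if self.side == 'left' else 'upper-right'
```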
(77) In addition, based on a movement around an empty parking place that is located on a rear side, a right side, or a left side of the vehicle and detected by the motion detection sensor 50, the controller 70 may control the display 30 to additionally display a warning icon 1001 on the empty parking place in a panorama top view image. At this time, the top view image displayed by the display 30 is as illustrated in
(79) As illustrated in
(80) The empty parking place having the warning icon 1001 displayed thereon is a space that a driver needs to check; before parking in an empty parking place on which the warning icon 1001 is displayed, the driver has to check that parking place. When a parking path to the empty parking place having the warning icon 1001 displayed thereon can be calculated, automatic parking may be performed, and when the parking path cannot be calculated, manual parking may be performed.
(82) The steering angle sensor 60 measures a steering angle depending on travel of a vehicle (1101).
(83) The top view image generator 20 generates a top view image of a parking space depending on the travel of the vehicle (1102).
(84) The display 30 displays the top view image generated by the top view image generator 20 (1103).
(85) The controller 70 captures the top view image displayed by the display 30 (1104).
(86) The controller 70 generates a panorama top view image by connecting the current top view image generated by the top view image generator 20 and the captured previous top view image, based on the steering angle measured by the steering angle sensor 60 (1105). At this time, the controller 70 may apply shading to the captured top view image of the panorama top view image. That is, the controller 70 may divide the captured top view image into a plurality of areas and may differently shade the plurality of areas.
(87) For example, the controller 70 may apply 20% shading to the captured previous top view image as a first step when the steering angle is less than 30 degrees and the distance from the vehicle is less than 10 m, may apply 50% shading to the captured previous top view image as a second step when the steering angle is 30 degrees or more and the distance from the vehicle is more than 10 m and less than 20 m, and may apply 80% shading to the captured previous top view image as a third step when the distance from the vehicle is 20 m or more. At this time, 100% shading corresponds to black, in which an object in the image can no longer be identified.
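The three shading steps in this example reduce to a small threshold function. The 30-degree, 10 m, and 20 m thresholds are the example values from the text; treating combinations not covered by the example as unshaded is an assumption.

```python
def shading_percent(steering_deg, distance_m):
    # Pick the shading step from the steering angle and the distance
    # from the vehicle; 100% would be fully black.
    if distance_m >= 20:
        return 80                               # third step
    if steering_deg >= 30 and distance_m > 10:
        return 50                               # second step
    if steering_deg < 30 and distance_m < 10:
        return 20                               # first step
    return 0                                    # not covered by the example
```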
(89) Referring to
(90) The processor 1100 may be a central processing unit (CPU) or a semiconductor device that processes instructions stored in the memory 1300 and/or the storage 1600. The memory 1300 and the storage 1600 may include various types of volatile or non-volatile storage media. For example, the memory 1300 may include a ROM (Read Only Memory) 1310 and a RAM (Random Access Memory) 1320.
(91) Thus, the operations of the method or the algorithm described in some forms of the present disclosure may be embodied directly in hardware or a software module executed by the processor 1100, or in a combination thereof. The software module may reside on a storage medium (that is, the memory 1300 and/or the storage 1600) such as a RAM, a flash memory, a ROM, an EPROM, an EEPROM, a register, a hard disk, a removable disk, or a CD-ROM. The exemplary storage medium may be coupled to the processor 1100, and the processor 1100 may read information out of the storage medium and may record information in the storage medium. Alternatively, the storage medium may be integrated with the processor 1100. The processor 1100 and the storage medium may reside in an application specific integrated circuit (ASIC). The ASIC may reside within a user terminal. In another case, the processor 1100 and the storage medium may reside in the user terminal as separate components.
(92) The apparatus and method for providing the top view image of the parking space in some forms of the present disclosure captures a top view image of a parking space on a path along which a vehicle travels, and provides a panorama top view image by connecting the current top view image and the previous top view image based on a steering angle depending on the travel of the vehicle, thereby enabling a user to freely select not only an empty parking place on the current top view image but also an empty parking place on the previous top view image.
(93) Therefore, some forms of the present disclosure are provided to explain the spirit and scope of the present disclosure, but not to limit them, so that the spirit and scope of the present disclosure is not limited by some forms of the present disclosure. The scope of the present disclosure should be construed on the basis of the accompanying claims, and all the technical ideas within the scope equivalent to the claims should be included in the scope of the present disclosure.
(94) The description of the disclosure is merely exemplary in nature and, thus, variations that do not depart from the substance of the disclosure are intended to be within the scope of the disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure.