Vehicle and method of controlling the same
11418693 · 2022-08-16
Assignee
Inventors
- Junsik AN (Seoul, KR)
- Hayeon Lee (Gwacheon-si, KR)
- Jongmo Kim (Goyang-si, KR)
- Jinwook Choi (Seoul, KR)
- Minsung Son (Gwacheon-si, KR)
US classification
- 1/1
Cpc classification
H04N23/60
G01S2015/932
G01S15/931
B60R1/27
B60R1/00
B60R2300/806
B60R2300/105
B60R2300/607
G01C21/30
B60R2300/20
H04N5/2628
G06V20/586
H04N7/18
G01S15/86
G06V10/147
H04N23/90
H04N7/183
International classification
Abstract
A vehicle includes a camera unit disposed in the vehicle to have a plurality of channels and configured to obtain an image around the vehicle, the camera unit including one or more cameras, a sensing device including an ultrasonic sensor, the sensing device configured to obtain distance information between an object and the vehicle, and a controller configured to match a part of the image around the vehicle with at least one mask, form map information based on the at least one mask and the distance information, determine at least one control point based on the map information, and obtain the image around the vehicle based on a priority of the camera unit corresponding to a surrounding type of the vehicle determined based on the control point.
Claims
1. A vehicle comprising: a display; a camera unit disposed in the vehicle, comprising a plurality of cameras; a sensing device including an ultrasonic sensor, the sensing device configured to obtain distance information between an object and the vehicle; and a controller configured to: identify a free space based on images obtained by the plurality of cameras, based on the free space, determine a parking direction in which the vehicle is to be parked, identify an image corresponding to the parking direction among the images obtained by the plurality of cameras, generate a view image based on the identified image and the remaining image among the images obtained by the plurality of cameras, and control display of the view image, wherein, when controlling the display of the view image, the controller is configured to: divide a display region of the display into a plurality of regions based on a number of the plurality of cameras, and adjust a boundary of a region in which the identified image is displayed so that the region in which the identified image is displayed is displayed larger than each region in which the remaining images are displayed.
2. The vehicle according to claim 1, wherein the controller is configured to generate map information based on the distance information between the object and the vehicle obtained by the sensing device, determine at least one control point based on the map information, and control driving to avoid the at least one control point, wherein the map information comprises the distance information corresponding to pixels of the images obtained by the plurality of cameras.
3. The vehicle according to claim 1, wherein the controller is configured to convert the images obtained by the plurality of cameras to a vehicle coordinate system to match with a mask.
4. The vehicle according to claim 1, wherein the controller is configured to determine a surrounding type of the vehicle based on the identified free space, wherein the surrounding type further includes a relationship between the object and the vehicle, and a relationship between a road and the vehicle, wherein the controller is configured to determine the surrounding type of the vehicle through learning of the images obtained by the plurality of cameras.
5. The vehicle according to claim 4, wherein, when the surrounding type of the vehicle is longitudinal parking on a side of the vehicle, the controller is configured to assign a high priority to a camera on the side of the vehicle among the plurality of cameras.
6. The vehicle according to claim 4, wherein, when the surrounding type of the vehicle is reverse diagonal parking of the vehicle, the controller is configured to assign a high priority to a camera in front of the vehicle among the plurality of cameras.
7. The vehicle according to claim 4, wherein, when the surrounding type of the vehicle is forward diagonal parking of the vehicle, the controller is configured to assign a high priority to a camera on a side of the vehicle among the plurality of cameras.
8. The vehicle according to claim 4, wherein, when the surrounding type of the vehicle is rear parking of the vehicle, the controller is configured to assign a high priority to a camera behind the vehicle among the plurality of cameras.
9. The vehicle according to claim 1, wherein the controller is configured to assign a priority for the plurality of cameras based on the parking direction, and change the priority in real time in response to driving of the vehicle.
10. The vehicle according to claim 9, wherein the controller is configured to form a boundary line on the view image based on priority information of the plurality of cameras and output the boundary line to the display.
11. A method of controlling a vehicle, the method comprising: obtaining, by a camera unit having a plurality of cameras, images around the vehicle; obtaining, by a sensing device, distance information between an object and the vehicle; identifying a free space based on images obtained by the plurality of cameras; determining a parking direction in which the vehicle is to be parked, based on the free space; identifying an image corresponding to the parking direction among the images obtained by the plurality of cameras; generating a view image based on the identified image and the remaining image among the images obtained by the plurality of cameras; and displaying, by a display, the view image, wherein the displaying of the view image includes: dividing a display region of the display into a plurality of regions based on a number of the plurality of cameras, and adjusting a boundary of a region in which the identified image is displayed so that the region in which the identified image is displayed is displayed larger than each region in which the remaining images are displayed.
12. The method according to claim 11, further comprising: generating map information based on the distance information between the object and the vehicle obtained by the sensing device, wherein the map information comprises the distance information corresponding to pixels of the images obtained by the plurality of cameras.
13. The method according to claim 11, further comprising converting the images obtained by the plurality of cameras to a vehicle coordinate system to match with a mask.
14. The method according to claim 11, further comprising determining a surrounding type of the vehicle based on the identified free space, wherein the surrounding type further includes a relationship between the object and the vehicle, and a relationship between a road and the vehicle, wherein obtaining the image around the vehicle comprises determining the surrounding type of the vehicle through learning of the images obtained by the plurality of cameras.
15. The method according to claim 14, wherein obtaining the images obtained by the plurality of cameras comprises giving a high priority to a camera on a side of the vehicle among the plurality of cameras when the surrounding type of the vehicle is longitudinal parking on the side of the vehicle.
16. The method according to claim 14, wherein obtaining the image around the vehicle comprises giving a high priority to a camera in front of the vehicle among the plurality of cameras when the surrounding type of the vehicle is reverse diagonal parking of the vehicle.
17. The method according to claim 14, wherein obtaining the image around the vehicle comprises giving a high priority to a camera on a side of the vehicle among the plurality of cameras when the surrounding type of the vehicle is forward diagonal parking of the vehicle.
18. The method according to claim 14, wherein obtaining the image around the vehicle comprises giving a high priority to a camera behind the vehicle among the plurality of cameras when the surrounding type of the vehicle is rear parking of the vehicle.
19. The method according to claim 11, further comprising assigning a priority for the plurality of cameras based on the parking direction, and changing the priority in real time in response to driving of the vehicle.
20. The method according to claim 19, further comprising: forming a boundary line on the view image based on the priority of the plurality of cameras; and outputting the boundary line to the display.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) These and/or other aspects of the disclosure will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
(9) Like reference numerals refer to like elements throughout the specification. Not all elements of the embodiments of the disclosure will be described, and the description of what are commonly known in the art or what overlap each other in the embodiments will be omitted. The terms as used throughout the specification, such as “˜ part,” “˜ module,” “˜ member,” “˜ block,” etc., may be implemented in software and/or hardware, and a plurality of “˜ parts,” “˜ modules,” “˜ members,” or “˜ blocks” may be implemented in a single element, or a single “˜ part,” “˜ module,” “˜ member,” or “˜ block” may include a plurality of elements.
(10) It will be further understood that the term “connect” and its derivatives refer both to direct and indirect connection, and the indirect connection includes a connection over a wireless communication network. The terms “include (or including)” and “comprise (or comprising)” are inclusive or open-ended and do not exclude additional, unrecited elements or method steps, unless otherwise mentioned. It will be further understood that the term “member” and its derivatives refer both to when a member is in contact with another member and when another member exists between the two members. It will be understood that, although the terms first, second, third, etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another region, layer or section.
(11) It is to be understood that the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Reference numerals used for method steps are merely used for convenience of explanation, but not to limit an order of the steps. Thus, unless the context clearly dictates otherwise, the written order may be practiced otherwise.
(12) Hereinafter, an operation principle and embodiments of the disclosure will be described with reference to accompanying drawings.
(14) Referring to the drawings, the vehicle 1 may include a sensing device 100, a controller 200, a camera unit 300, and a display 400.
(15) The camera unit 300 can include one or more cameras 300. The camera unit 300 has a plurality of channels and may obtain images around the vehicle 1. Hereinafter, the term “camera 300” may refer to the camera unit or an individual camera or cameras of the camera unit.
(16) The camera(s) 300 installed in the vehicle 1 may include a charge-coupled device (CCD) camera or a CMOS color image sensor. Here, both the CCD and the CMOS refer to a sensor that converts light received through the lens of the camera 300 into an electric signal and stores the electric signal.
(17) The sensing device 100 may include an ultrasonic sensor.
(18) The ultrasonic sensor may employ a method of transmitting ultrasonic waves and detecting a distance to an obstacle using the ultrasonic waves reflected from the obstacle.
(19) The sensing device 100 may obtain distance information between the vehicle 1 and an obstacle around the vehicle 1.
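As an illustrative, non-limiting sketch of the time-of-flight principle described above (the constant and function name are assumptions for illustration, not part of the claimed subject matter), the distance to an obstacle may be estimated from the round-trip travel time of the ultrasonic pulse:

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound in air at 20 deg C

def ultrasonic_distance_m(round_trip_time_s: float) -> float:
    """Estimate obstacle distance from the echo round-trip time.

    The pulse travels to the obstacle and back, so the one-way
    distance is half of the total path length.
    """
    return SPEED_OF_SOUND_M_PER_S * round_trip_time_s / 2.0

# Example: an echo received 5.8 ms after transmission corresponds to roughly 1 m.
print(ultrasonic_distance_m(0.0058))  # -> 0.9947 m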
(20) The display 400 may be provided as an instrument panel provided in the vehicle 1 or a display device provided in a center fascia.
(21) The display 400 may include cathode ray tubes (CRTs), a digital light processing (DLP) panel, a plasma display panel (PDP), a liquid crystal display (LCD) panel, an electro luminescence (EL) panel, an electrophoretic display (EPD) panel, an electrochromic display (ECD) panel, a light emitting diode (LED) panel or an organic light emitting diode (OLED) panel, but is not limited thereto.
(22) The controller 200 may match a part of an image around the vehicle 1 with at least one mask. That is, the obstacle, a floor, or the like displayed in the image may be matched to a corresponding mask.
(23) The controller 200 may form map information based on the at least one mask and the distance information.
(24) The map information may refer to information including the distance information between the vehicle 1 and the surrounding obstacle.
(25) The controller 200 may obtain the image around the vehicle 1 based on the priority of the camera 300 corresponding to the surrounding type of the vehicle 1 determined based on the map information.
(26) The surrounding type may refer to a relationship between the vehicle 1 and the obstacle, and a relationship between the vehicle 1 and a road.
(27) The priority may refer to information related to a recognition area of the camera 300.
(28) The map information may include the distance information corresponding to pixels of the image around the vehicle 1. That is, the map information may be provided as information matching the distance between the vehicle 1 and the obstacle and the pixels of the image around the vehicle 1.
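A minimal sketch of one possible data structure for such map information is given below, assuming (as an illustration only) integer region labels produced by the mask matching and a per-region ultrasonic distance; the function and label values are hypothetical:

```python
import numpy as np

def build_distance_map(mask: np.ndarray, distances_by_label: dict) -> np.ndarray:
    """Associate each image pixel with a distance value.

    `mask` is an H x W array of integer region labels (e.g., obstacle,
    free space, floor) obtained by matching the surround-view image with
    masks, and `distances_by_label` maps a label to the ultrasonic
    distance (in meters) measured for that region. Pixels whose label
    has no measurement remain NaN.
    """
    distance_map = np.full(mask.shape, np.nan, dtype=float)
    for label, distance_m in distances_by_label.items():
        distance_map[mask == label] = distance_m
    return distance_map

# Hypothetical labels: 1 = obstacle (parked vehicle), 2 = free space, 3 = floor.
mask = np.array([[1, 1, 2],
                 [3, 2, 2]])
print(build_distance_map(mask, {1: 1.8}))
```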
(29) The controller 200 may convert the image around the vehicle 1 into a vehicle coordinate system to correspond to the at least one mask.
(30) That is, the controller 200 may obtain the image around the vehicle 1 with a coordinate system centered on the camera 300, but the controller 200 may convert the image around the vehicle 1 into the coordinate system of the vehicle itself to form the map information.
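One conventional way such a conversion could be carried out is a rigid transform from the camera-centered frame to the vehicle frame; the sketch below assumes illustrative extrinsic parameters and is not the specific transform used in the embodiments:

```python
import numpy as np

def camera_to_vehicle(points_cam: np.ndarray,
                      rotation: np.ndarray,
                      translation: np.ndarray) -> np.ndarray:
    """Apply the rigid transform p_vehicle = R @ p_camera + t.

    `points_cam` is an N x 3 array of points in the camera frame,
    `rotation` is the 3 x 3 camera-to-vehicle rotation matrix, and
    `translation` is the camera position expressed in the vehicle frame.
    """
    return points_cam @ rotation.T + translation

# Illustrative extrinsics: a side camera mounted 1.0 m to the left of the
# vehicle origin, with the camera axes aligned to the vehicle axes.
R = np.eye(3)
t = np.array([0.0, 1.0, 0.0])
print(camera_to_vehicle(np.array([[2.0, 0.0, 0.0]]), R, t))  # -> [[2., 1., 0.]]
```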
(31) The controller 200 may determine the surrounding type of the vehicle 1 through learning of the image around the vehicle 1.
(32) The learning performed by the controller 200 may be performed through deep learning.
(33) Deep learning is a field of machine learning, and may refer to expressing data in a form that a computer can process, such as a vector or a graph, and building a model to learn from that data.
(34) The model of deep learning may be formed based on a neural network; in particular, the model may be built by stacking multiple layers of neural networks.
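As a hedged illustration of such a stacked-layer model (the class name, layer sizes, and surrounding-type labels below are arbitrary assumptions and do not reproduce the model of the embodiments), a small image classifier could be structured as follows:

```python
import torch
import torch.nn as nn

class SurroundingTypeClassifier(nn.Module):
    """Illustrative stacked-layer network mapping a surround-view image to a
    surrounding-type label (e.g., longitudinal, reverse diagonal, forward
    diagonal, or rear parking). Layer sizes are arbitrary choices."""

    def __init__(self, num_types: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_types)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Example: one 3-channel 128 x 128 image -> logits over 4 surrounding types.
logits = SurroundingTypeClassifier()(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 4])
```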
(35) When performing longitudinal parking on a side of the vehicle 1, the controller 200 may assign the priority to the channel on the side of the vehicle 1 among the plurality of channels.
(36) When performing reverse diagonal parking of the vehicle 1, the controller 200 may assign the priority to the channel in front of the vehicle 1 among the plurality of channels.
(37) When performing forward diagonal parking of the vehicle 1, the controller 200 may assign the priority to the channel on the side of the vehicle 1 among the plurality of channels.
(38) When performing rear parking of the vehicle 1, the controller 200 may assign the priority to the channel behind the vehicle 1 among the plurality of channels.
(39) The controller 200 may change the priority in real time in response to driving of the vehicle 1.
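The priority assignment described in the preceding paragraphs can be summarized, purely as an illustrative sketch, by a lookup from the determined surrounding type to the channel that receives the highest priority; the string labels and function name are assumptions made for this example:

```python
# Hypothetical mapping from the determined surrounding type to the camera
# channel that receives the highest priority, following the behaviour
# described above.
PRIORITY_BY_SURROUNDING_TYPE = {
    "longitudinal_side_parking": "side",
    "reverse_diagonal_parking": "front",
    "forward_diagonal_parking": "side",
    "rear_parking": "rear",
}

def prioritized_channel(surrounding_type: str) -> str:
    """Return the channel to prioritize for the given surrounding type."""
    return PRIORITY_BY_SURROUNDING_TYPE[surrounding_type]

print(prioritized_channel("rear_parking"))  # -> "rear"
```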
(40) The priority of the camera 300 changed in response to parking of the above-described vehicle 1 will be described in detail below.
(41) The controller 200 may form a top view image based on the map information of the vehicle 1, and may form a boundary line in the top view based on priority information to output it to the display 400.
(42) The boundary formed in the top view may refer to the recognition area of each of the cameras 300.
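One way the priority-dependent boundary adjustment (also recited in claim 1 for the displayed view image) might be realized is sketched below; the function, the weighting factor, and the pixel values are illustrative assumptions only:

```python
def region_widths(total_width_px: int, num_cameras: int,
                  prioritized_index: int, weight: float = 2.0) -> list:
    """Divide a display region among the cameras and widen the region of
    the prioritized camera by shifting its boundaries.

    All regions start with an equal share; the prioritized region is
    weighted `weight` times as wide as each remaining region.
    """
    shares = [1.0] * num_cameras
    shares[prioritized_index] = weight
    total = sum(shares)
    return [int(total_width_px * s / total) for s in shares]

# Four cameras, the third (index 2) prioritized: it receives twice the width.
print(region_widths(1000, 4, 2))  # -> [200, 200, 400, 200]
```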
(43) The controller 200 may be implemented with a memory storing an algorithm to control operation of the components in the vehicle 1 or data about a program that implements the algorithm, and a processor carrying out the aforementioned operation using the data stored in the memory. The memory and the processor may be implemented in separate chips. Alternatively, the memory and the processor may be implemented in a single chip.
(44) At least one component may be added or deleted corresponding to the performance of the components of the vehicle 1 illustrated in the drawings.
(45) In the meantime, each of the components illustrated in the drawings may be implemented in software and/or hardware.
(47) Referring to
(48) The controller 200 may match the mask with the obtained image around the vehicle 1.
(49) Meanwhile, in this process, the controller 200 may convert the coordinates of each of the cameras 300 to the coordinates of the vehicle 1. Particularly, a mask M2-1 may be matched to the obstacle such as a vehicle illustrated in the image around the vehicle 1.
(50) In addition, an empty space on the road may be matched with a different mask M2-2.
(51) On the other hand, the floor or ground may be matched with another mask M2-3. The controller 200 may match the mask with the distance information corresponding to each of the pixels.
(52) The controller 200 may determine map information F2 based on this operation.
(53) Meanwhile, in
(54) Meanwhile, when the map information of each of the cameras 300 determined as described above is collected, map information F3 illustrated in the drawings may be formed.
(55) The map information may be determined based on a recognition result of the camera 300 and the ultrasonic sensor and a distance coordinate system. Based on the map information, the controller 200 may determine a free space and a control point for parking control using distance map data for each of the pixels of each of the cameras 300.
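A hedged sketch of how a free space and a control point might be derived from such per-pixel distance map data is shown below; the clearance threshold and the centroid-based selection of the control point are illustrative assumptions and do not reproduce the selection rule of the embodiments:

```python
import numpy as np

def find_free_space_and_control_point(distance_map: np.ndarray,
                                      min_clearance_m: float = 0.5):
    """Illustrative extraction of a free space and a control point.

    A pixel is treated as free when no obstacle distance is associated
    with it (NaN) or when the obstacle is farther than `min_clearance_m`.
    The control point is taken here as the centroid of the free region.
    """
    free = np.isnan(distance_map) | (distance_map > min_clearance_m)
    if not free.any():
        return free, None
    rows, cols = np.nonzero(free)
    control_point = (float(rows.mean()), float(cols.mean()))
    return free, control_point

distance_map = np.array([[0.3, np.nan, np.nan],
                         [0.3, np.nan, np.nan]])
print(find_free_space_and_control_point(distance_map)[1])  # -> (0.5, 1.5)
```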
(56) Through the above-described operation, the map information may be formed based on the image from the camera 300 best suited to spatial recognition for the type of parking space around the vehicle 1.
(57) This operation minimizes occlusion in the image and allows a quick determination of whether a space is occupied.
(58) In addition, the above-described map information may be used to generate a distance map optimized for recognition by comparing the recognition results of the cameras 300 at different locations.
(59) In addition, control performance may be optimized by generating the coordinates of an entry point required for controlling the subject vehicle.
(60) Meanwhile, the distance map illustrated in
(62) Referring to
(64) Referring to
(65) Also, the controller 200 may control the vehicle 1 based on two control points P51 and P52. Since the two control points are located on the side of the vehicle, the controller 200 may assign a high priority to the side camera 300. The controller 200 may perform side parking by widening a width of an area Ca51 recognized by the side camera 300.
(66) Referring to
(67) Even in this case, the controller 200 may control the vehicle 1 based on two control points P61 and P62. Although the two control points are located on the side of the vehicle 1, their positions differ from those in the preceding case.
(68) The controller 200 may assign a high priority to the front camera 300. The controller 200 may park in an area Ca61 recognized by the front camera 300.
(69) Referring to
(70) Even in this case, the controller 200 may control the vehicle 1 based on two control points P71 and P72. Although the two control points are located on the side of the vehicle 1, their positions differ from those in the preceding cases.
(71) The controller 200 may assign the high priority to the side camera 300. The controller 200 may park in an area Ca71 recognized by the side camera 300.
(72) Referring to
(73) The controller 200 may control the vehicle 1 based on two control points P81 and P82. Since the two control points are located at the rear of the vehicle, the controller 200 may assign the high priority to the rear camera 300. The controller 200 may perform rear parking by widening the width of the area recognized by the rear camera 300.
(74) On the other hand, in the driving of the vehicle 1, surrounding situations may change in real time, and the position of the vehicle 1 and the control point may also change in real time. Accordingly, the controller 200 may change the priority of the camera 300 by considering a positional relationship between the vehicle 1 and the control point in real time.
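A minimal sketch of such a real-time re-evaluation is given below, assuming (for illustration only) that the prioritized camera is chosen from the bearing of a control point in the vehicle frame; the sector boundaries, camera names, and coordinate convention are assumptions, not the claimed logic:

```python
import math

def prioritized_camera(control_point_xy, vehicle_heading_rad: float = 0.0) -> str:
    """Pick a camera (front, rear, left, right) from the bearing of a
    control point in the vehicle frame. Called repeatedly while driving,
    this lets the priority follow the control point as positions change."""
    x, y = control_point_xy  # x forward, y left, in the vehicle frame
    bearing = math.degrees(math.atan2(y, x)) - math.degrees(vehicle_heading_rad)
    bearing = (bearing + 180.0) % 360.0 - 180.0  # normalize to (-180, 180]
    if -45.0 <= bearing <= 45.0:
        return "front"
    if 45.0 < bearing <= 135.0:
        return "left"
    if -135.0 <= bearing < -45.0:
        return "right"
    return "rear"

print(prioritized_camera((-3.0, 0.5)))  # control point behind the vehicle -> "rear"
```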
(76) Referring to
(77) For example, in the case of
(78) On the other hand, when the vehicle 1 performs rear parking as illustrated in
(79) Meanwhile, the above-described operations are only one embodiment for describing the operation of the disclosure, and the operations of forming the map information according to the distance and changing the priority or the recognition area of the cameras 300 accordingly are not limited thereto.
(81) Referring to
(82) In addition, the vehicle 1 may form the map information based on the image around the vehicle 1 and the distance information (1002).
(83) In addition, the map information may include the distance information of each obstacle, and the controller 200 may determine the control point based on the distance information of each obstacle (1003).
(84) Also, the controller 200 may determine the surrounding type based on the positional relationship between the vehicle 1 and the control point (1004).
(85) Based on the determined type, the controller 200 may assign the priority to each of the cameras 300 and control the vehicle 1 based on the recognition area of the camera 300 assigned the priority (1005).
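The sequence of steps just described can be tied together, purely as an illustrative sketch, in a single control cycle; the helper callables below are placeholders for operations described elsewhere in this specification and are not a definitive implementation:

```python
def parking_assist_cycle(cameras, ultrasonic_sensor, display,
                         match_masks, build_map, find_control_points,
                         classify_surrounding, assign_priority, render_view):
    """Illustrative control loop following the described sequence:
    obtain images and distances, form map information, determine control
    points, determine the surrounding type, assign camera priorities, and
    display the adjusted view."""
    images = {name: cam.capture() for name, cam in cameras.items()}
    distances = ultrasonic_sensor.measure()

    masks = match_masks(images)                      # mask matching on the obtained images
    map_info = build_map(masks, distances)           # map information (step 1002)
    control_points = find_control_points(map_info)   # control points (step 1003)
    surrounding = classify_surrounding(map_info, control_points)  # surrounding type (step 1004)
    priorities = assign_priority(surrounding, control_points)     # camera priorities (step 1005)

    display.show(render_view(images, priorities))
    return priorities
```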
(86) According to the embodiments of the disclosure, the vehicle 1 and the method of controlling the vehicle 1 may change the recognition area of the camera according to the type of parking, thereby enabling efficient autonomous parking.
(87) The disclosed embodiments may be implemented in the form of a recording medium storing computer-executable instructions that are executable by a processor. The instructions may be stored in the form of a program code, and when executed by a processor, the instructions may generate a program module to perform operations of the disclosed embodiments. The recording medium may be implemented as a non-transitory computer-readable recording medium.
(88) The non-transitory computer-readable recording medium may include all kinds of recording media storing commands that can be interpreted by a computer. For example, the non-transitory computer-readable recording medium may be, for example, ROM, RAM, a magnetic tape, a magnetic disc, flash memory, an optical data storage device, etc.
(89) Embodiments of the disclosure have thus far been described with reference to the accompanying drawings. It should be obvious to a person of ordinary skill in the art that the disclosure may be practiced in other forms than the embodiments as described above without changing the technical idea or essential features of the disclosure. The above embodiments are only by way of example, and should not be interpreted in a limited sense.