Method and apparatus for detecting a pedestrian by a vehicle during night driving
10391937 · 2019-08-27
Assignee
Inventors
- Young Chul Oh (Seongnam Gyeonggi-do, KR)
- Myung Seon Heo (Seoul, KR)
- Wan Jae Lee (Suwon Gyeonggi-do, KR)
- Byung Yong You (Suwon Gyeonggi-do, KR)
Cpc classification
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
H04N23/11
ELECTRICITY
B60K2360/179
PERFORMING OPERATIONS; TRANSPORTING
B60R1/30
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/303
PERFORMING OPERATIONS; TRANSPORTING
H04N7/181
ELECTRICITY
G06V20/58
PHYSICS
B60R2300/307
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/102
PERFORMING OPERATIONS; TRANSPORTING
G06V40/103
PHYSICS
H04N7/188
ELECTRICITY
B60R1/23
PERFORMING OPERATIONS; TRANSPORTING
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
B60K35/28
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/8053
PERFORMING OPERATIONS; TRANSPORTING
International classification
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
H04N7/18
ELECTRICITY
Abstract
A method and an apparatus for detecting a pedestrian by a vehicle during night driving are provided, in which the apparatus includes: a first camera configured to take a first image including color information of a vicinity of the vehicle during night driving; a second camera configured to take a second image including thermal distribution information of the vicinity of the vehicle; a pedestrian detector configured to detect a non-pedestrian area by using the color information from the first image and detect a pedestrian area by excluding the non-pedestrian area from the second image; and a display configured to match and display the pedestrian area on the second image.
Claims
1. An apparatus for detecting a pedestrian by a vehicle during night driving, comprising: a first camera being a charge coupled device (CCD) camera configured to take a first image that is a color image including color information of a vicinity of the vehicle during night driving; a second camera being an infrared camera configured to take a second image that is a far-infrared image including thermal distribution information of the vicinity of the vehicle; an image matcher configured to match the first image and the second image; a pedestrian detector configured to detect a non-pedestrian area by using an area in which a color value is more than a reference value from the first image including the color information of the vicinity of the vehicle during night driving, and detect a pedestrian area in the second image excluding the detected non-pedestrian area and including the thermal distribution information of the vicinity of the vehicle by using a feature detection and learning algorithm; and a display configured to display the first image including the pedestrian area of the second image, wherein the image matcher calculates a real coordinate of an object from a coordinate of the first image by using inside and outside parameters of the first camera and a real distance between a virtual starting point and the object, wherein the image matcher calculates a corresponding coordinate of the second image corresponding to the coordinate of the first image by using the real coordinate of the object, inside and outside parameters of the second camera, and the real distance, and wherein the virtual starting point is a central point between points representing locations of the first and second cameras, in which a vertical line from the points of the first camera and the second camera and planes of the first image and the second image meet.
2. The apparatus of claim 1, wherein the pedestrian detector comprises the image matcher configured to match the first image and the second image, a non-pedestrian area detector configured to detect the area in which the color value is more than the reference value as the non-pedestrian area from the first image based on the color information of the first image, an attention area extractor configured to extract an attention area by excluding the non-pedestrian area from the second image, and a pedestrian area extractor configured to extract the pedestrian area from the attention area.
3. A method for detecting a pedestrian by a vehicle during night driving, comprising the steps of: taking a first image that is a color image and a second image that is a far-infrared image of a vicinity of the vehicle through a first camera and a second camera, respectively, during night driving, the first camera being a charge coupled device (CCD) camera and the second camera being a far-infrared camera; matching the first image and the second image by an image matcher; detecting a non-pedestrian area by using an area in which a color value is more than a reference value of the first image including color information of the vicinity of the vehicle during night driving; excluding the non-pedestrian area from the second image including thermal distribution information of the vicinity of the vehicle; detecting a pedestrian area in the second image excluding the detected non-pedestrian area and including the thermal distribution information of the vicinity of the vehicle by using a feature detection and learning algorithm; displaying the first image including the pedestrian area of the second image, wherein the step of matching the first image and the second image further comprises steps of: calculating a real coordinate of an object from a coordinate of the first image by using inside and outside parameters of the first camera and a real distance between a virtual starting point and the object; and calculating a corresponding coordinate of the second image corresponding to the coordinate of the first image by using a real coordinate of the object, inside and outside parameters of the second camera, and the real distance, wherein the virtual starting point is a central point between points representing locations of the first and second cameras, in which a vertical line from starting points of the first camera and the second camera and planes of the first image and the second image meet.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The above and other objects, features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(7) Hereinafter, the embodiments of the present invention will be described in detail with reference to the drawings.
(8) It is understood that the term vehicle or vehicular or other similar term as used herein is inclusive of motor vehicles in general such as passenger automobiles including sports utility vehicles (SUV), buses, trucks, various commercial vehicles, watercraft including a variety of boats and ships, aircraft, and the like, and includes hybrid vehicles, electric vehicles, plug-in hybrid electric vehicles, hydrogen-powered vehicles and other alternative fuel vehicles (e.g. fuels derived from resources other than petroleum). As referred to herein, a hybrid vehicle is a vehicle that has two or more sources of power, for example both gasoline-powered and electric-powered vehicles.
(9) The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
(10) Further, the control logic of the present invention may be embodied as non-transitory computer readable media on a computer readable medium containing executable program instructions executed by a processor, controller or the like. Examples of computer readable media include, but are not limited to, ROM, RAM, compact disc (CD)-ROMs, magnetic tapes, floppy disks, flash drives, smart cards and optical data storage devices. The computer readable medium can also be distributed in network coupled computer systems so that the computer readable media is stored and executed in a distributed fashion, e.g., by a telematics server or a Controller Area Network (CAN).
(12) Referring to
(13) The first camera 10 takes a first image including color information of the vicinity of a vehicle during night driving. The first camera 10 can be implemented by a CCD (Charge Coupled Device) camera and the like.
(14) The second camera 20 takes a second image including thermal distribution information of the vicinity of the vehicle. The second camera 20 can be implemented by an infrared camera, a far-infrared camera, a near-infrared camera, and the like.
(15) The first camera 10 and the second camera 20 are mounted in pairs on at least one of the front, rear, and sides of the vehicle. The first camera 10 and the second camera 20 are arranged at two different points on the same plane (for example, the front). In particular, the first camera 10 and the second camera 20 capture images of the same scene from different viewpoints.
(16) The pedestrian detector 30 detects a non-pedestrian area from the first image and detects the pedestrian area by excluding the non-pedestrian area from the second image. This pedestrian detector 30 can be implemented by an image processor.
(17) The pedestrian detector 30 includes an image matcher 31, a non-pedestrian area detector 32, an attention area extractor 33, and a pedestrian area extractor 34.
(18) The image matcher 31 matches the first image and the second image by using viewpoint change technology. In other words, the image matcher 31 mutually matches the coordinates of the first image and the second image, which are obtained from different viewpoints.
(19) The process by which the image matcher 31 calculates the corresponding coordinates between the first image and the second image will be described with reference to
(20) Referring to
(21) When the image coordinate p(x, y) of the first image is input, in order to calculate the corresponding image coordinate in the second image, the image matcher 31 first calculates the real coordinate (X, Y, Z) of the point P by using the inside and outside parameters of the first camera 10 and the real distance Z.
(22) The image matcher 31 then calculates the image coordinate p(u, v) of the second image corresponding to the image coordinate p(x, y) of the first image by using the real coordinate (X, Y, Z) of the point P, the inside and outside parameters of the second camera 20, and the real distance Z.
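The coordinate mapping in paragraphs (21) and (22) can be sketched with a simple pinhole-camera model. This is a minimal illustration, not the patent's actual calibration: the parameter names fx, fy, cx, cy (standing in for the inside parameters) and the offset (tx, ty) (standing in for the outside parameters relative to the virtual starting point) are assumptions.

```python
# Sketch of the image matcher's two-step mapping under an assumed
# pinhole-camera model: pixel of first image -> real coordinate -> pixel
# of second image. All parameter names and values are illustrative.

def pixel_to_real(x, y, Z, fx, fy, cx, cy):
    """Back-project pixel (x, y) of the first image to the real coordinate
    (X, Y, Z), given the real distance Z and the first camera's inside
    parameters (focal lengths fx, fy and principal point cx, cy)."""
    X = (x - cx) * Z / fx
    Y = (y - cy) * Z / fy
    return X, Y, Z

def real_to_pixel(X, Y, Z, fx, fy, cx, cy, tx=0.0, ty=0.0):
    """Project the real coordinate into the second image; (tx, ty) stands in
    for the second camera's outside parameters (its offset from the
    virtual starting point)."""
    u = fx * (X - tx) / Z + cx
    v = fy * (Y - ty) / Z + cy
    return u, v

# Round trip with identical cameras and zero offset recovers the input pixel.
X, Y, Z = pixel_to_real(320.0, 240.0, 5.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
u, v = real_to_pixel(X, Y, Z, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```

With a nonzero (tx, ty) offset the projected coordinate shifts, which is how the matcher finds the corresponding pixel in a camera mounted at a different point on the same plane.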
(23) The non-pedestrian area detector 32 detects the non-pedestrian area from the first image by using the color information (hue information) of the first image. At this time, the non-pedestrian area detector 32 detects the area in which the color value exceeds the reference value in the first image as the non-pedestrian area.
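The thresholding in paragraph (23) can be sketched as follows. This is a minimal illustration: the image is modeled as a 2-D list of color values, and the reference value of 200 is an assumption, not a value from the patent.

```python
# Minimal sketch of the non-pedestrian area detector: mark every pixel of
# the first (color) image whose color value exceeds a reference value,
# e.g. bright headlights or street lamps at night. The 3x3 frame and the
# reference value 200 are illustrative assumptions.

def detect_non_pedestrian_area(color_values, reference=200):
    """Return a boolean mask that is True where the value exceeds the reference."""
    return [[v > reference for v in row] for row in color_values]

frame = [[ 40,  60, 250],
         [ 35, 240, 245],
         [ 30,  50,  45]]
mask = detect_non_pedestrian_area(frame)
# mask marks the bright top-right cluster as the non-pedestrian area
```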
(24) The attention area extractor 33 excludes the non-pedestrian area detected by the non-pedestrian area detector 32 from the second image. In particular, the second image excluding the non-pedestrian area becomes the attention area in which the pedestrian area can be detected.
(25) The pedestrian area extractor 34 extracts the pedestrian area from the second image excluding the non-pedestrian area. At this time, the pedestrian area extractor 34 extracts the pedestrian area from the second image by using a feature detection and learning algorithm (a pedestrian detection algorithm).
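The extraction steps in paragraphs (24) and (25) can be sketched as follows. Note the hedge: a plain thermal threshold stands in for the patent's feature detection and learning algorithm, and the threshold value and tiny frames are illustrative assumptions.

```python
# Sketch of the attention area extractor and pedestrian area extractor.
# A simple warmth threshold is a stand-in for the feature detection and
# learning algorithm; all values below are illustrative assumptions.

def extract_attention_area(thermal, non_pedestrian_mask):
    """Zero out thermal pixels that fall inside the non-pedestrian area."""
    return [[0 if masked else t
             for t, masked in zip(t_row, m_row)]
            for t_row, m_row in zip(thermal, non_pedestrian_mask)]

def extract_pedestrian_area(attention, warm_threshold=30):
    """Return (row, col) pixels warm enough to belong to a pedestrian.
    A real system would run a trained detector here instead."""
    return [(r, c)
            for r, row in enumerate(attention)
            for c, t in enumerate(row)
            if t >= warm_threshold]

thermal = [[10, 12, 90],   # 90: hot street lamp, inside the non-pedestrian area
           [11, 36, 11],   # 36: pedestrian body heat
           [10, 11, 10]]
npa     = [[False, False, True],
           [False, False, False],
           [False, False, False]]
attention = extract_attention_area(thermal, npa)
pedestrian_pixels = extract_pedestrian_area(attention)  # only the warm body remains
```

Excluding the non-pedestrian area first keeps the hot street lamp from being mistaken for body heat, which is the point of restricting detection to the attention area.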
(26) The display 40 matches and displays the pedestrian area extracted by the pedestrian area extractor 34 on the second image. This display 40 can be implemented by an LCD (Liquid Crystal Display), an LED (Light Emitting Diode) display, a HUD (Head-Up Display), a transparent display, and the like.
(28) First, the pedestrian detector 30 of a pedestrian detection apparatus takes the first image and the second image of the vicinity (for example, the front, rear, or side) of a vehicle through the first camera 10 and the second camera 20 during night driving (S11). At this time, the first image and the second image are taken from different viewpoints; the first image (color image) includes the color information of the vicinity of the vehicle, and the second image includes the thermal distribution information of the vicinity of the vehicle.
(29) The image matcher 31 of the pedestrian detector 30 matches the image coordinate of the second image corresponding to the image coordinate of the first image (S12). In other words, the image matcher 31 calculates the real coordinate of the object from the coordinate of the first image by using the inside and outside parameters of the first camera 10 and the real distance between the virtual starting point and the object, and then calculates the corresponding coordinate of the second image by using the real coordinate of the object, the inside and outside parameters of the second camera 20, and the real distance.
(30) The non-pedestrian area detector 32 detects the non-pedestrian area from the first image by using the color information of the first image (S13). At this time, the non-pedestrian area detector 32 detects the area in which the color value exceeds the reference value as the non-pedestrian area in the first image. In particular, the non-pedestrian area detector 32 detects, as the non-pedestrian area, a bright area whose brightness exceeds the reference value compared to its vicinity, as shown in
(31) The attention area extractor 33 excludes the non-pedestrian area from the second image (S14). For example, the attention area extractor 33 detects the area of the far-infrared image corresponding to the non-pedestrian area detected from the color image and deletes that area, as shown in
(32) The pedestrian area extractor 34 detects the pedestrian area from the second image excluding the non-pedestrian area (S15). Here, the pedestrian area extractor 34 detects the pedestrian area in the far-infrared image excluding the non-pedestrian area by using a feature detection and learning algorithm (pedestrian detection algorithm), as shown in
(33) The display 40 matches and displays the pedestrian area detected by the pedestrian detector 30 on the first image (S16). For example, the display 40 matches and displays the pedestrian area detected from the far-infrared image on the color image as shown in
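Taken together, steps S13 through S15 amount to one pass over the matched images. The sketch below is a compact illustration under the same assumptions as before: the thresholds and the tiny 3x3 frames are invented for the example, and a plain warmth threshold again stands in for the feature detection and learning algorithm.

```python
# Illustrative sketch of steps S13-S15 over pre-matched images:
# threshold the color image, exclude that area from the thermal image,
# keep the remaining warm pixels as the pedestrian area.

def detect_pedestrians(color, thermal, color_ref=200, warm_ref=30):
    """Return (row, col) pixels of the thermal image judged to be a pedestrian."""
    pedestrian = []
    for r in range(len(color)):
        for c in range(len(color[0])):
            non_pedestrian = color[r][c] > color_ref            # S13
            if not non_pedestrian and thermal[r][c] >= warm_ref:  # S14 + S15
                pedestrian.append((r, c))
    return pedestrian

color   = [[250, 40, 40],
           [ 40, 40, 40],
           [ 40, 40, 40]]
thermal = [[ 95, 10, 10],  # 95: headlight glare, excluded via the color image
           [ 10, 36, 10],  # 36: pedestrian body heat
           [ 10, 10, 10]]
area = detect_pedestrians(color, thermal)
```

The detected area would then be matched and displayed on the color image, as in step S16.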