DETERMINATION DEVICE, DETERMINATION METHOD, AND STORAGE MEDIUM STORING PROGRAM
20210406563 · 2021-12-30
Assignee
Inventors
- Kenki UEDA (Edogawa-ku, JP)
- Ryosuke TACHIBANA (Shinagawa-ku, JP)
- Jun HATTORI (Chofu-shi, JP)
- Takashi KITAGAWA (Kodaira-shi, JP)
- Hirofumi OHASHI (Chiyoda-ku, JP)
- Toshihiro YASUDA (Osaka-shi, JP)
- Tetsuo TAKEMOTO (Edogawa-ku, JP)
CPC classification
- G06T7/246 (PHYSICS)
- G06V40/103 (PHYSICS)
- G06V20/58 (PHYSICS)
- B62D15/021 (PERFORMING OPERATIONS; TRANSPORTING)
International classification
Abstract
A determination device includes a processor. The processor is configured to detect an object in an image captured by an image capture section provided at a vehicle, generate a determination area in accordance with a direction of movement of the vehicle based on travel information of the vehicle and based on a position and a speed of the object, and determine danger to be present in a case in which the object is present in the determination area.
Claims
1. A determination device, comprising a processor, the processor being configured to: detect an object in an image captured by an image capture section provided at a vehicle; generate a determination area in accordance with a direction of movement of the vehicle based on travel information of the vehicle and based on a position and a speed of the object; and determine danger to be present in a case in which the object is present in the determination area.
2. The determination device of claim 1, wherein the processor is further configured to: compute a danger level with respect to the object present in the determination area; and determine danger to be present in a case in which the computed danger level exceeds a threshold.
3. The determination device of claim 2, wherein the processor is further configured to perform determination employing the threshold, and the threshold is lowered in conjunction with an increase in a steering angle of a steering wheel acquired from the travel information.
4. The determination device of claim 2, wherein: the processor is further configured to generate a plurality of determination areas based on the travel information and based on the position and the speed of the object; and the processor is further configured to compute the danger level for each of the determination areas and to determine danger to be present in a case in which the danger level exceeds a threshold in any one of the determination areas.
5. The determination device of claim 4, wherein the determination areas include: a first area for which the processor performs determination without considering the speed of the object; a second area for which the processor performs determination in consideration of a vehicle front-rear direction speed of the object; and a third area for which the processor performs determination in consideration of a vehicle left-right direction speed of the object.
6. The determination device of claim 1, wherein, in a case in which the processor detects a pedestrian instead of another vehicle, the processor increases a width of the determination area in comparison to a case in which the processor detects another vehicle.
7. The determination device of claim 1, wherein, in a case in which actuation information for an indicator light of the vehicle has been acquired, the processor increases a width of the determination area in comparison to a case in which the indicator light actuation information has not been acquired.
8. The determination device of claim 1, wherein the processor is further configured to maintain a determination that danger is present at a current timing in a case in which danger has been determined to be present based on the captured image in a prescribed number of preceding frames.
9. A determination method in which a computer executes processing, the processing comprising: detection processing to detect an object in an image captured by an image capture section provided at a vehicle; generation processing to generate a determination area in accordance with a direction of movement of the vehicle based on travel information of the vehicle and based on a position and a speed of the object; and determination processing to determine danger to be present in a case in which the object is present in the determination area.
10. A non-transitory storage medium storing a program executable by a computer to perform processing, the processing comprising: detection processing to detect an object in an image captured by an image capture section provided at a vehicle; generation processing to generate a determination area in accordance with a direction of movement of the vehicle based on travel information of the vehicle and based on a position and a speed of the object; and determination processing to determine danger to be present in a case in which the object is present in the determination area.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] An exemplary embodiment of the present disclosure will be described in detail based on the following figures, wherein:
DETAILED DESCRIPTION
[0030] As illustrated in
[0031] The ECUs 22 are provided as control devices that control respective sections of the vehicle 12 and also perform external communication. As illustrated in
[0032] The steering ECU 22A has a function of controlling electric power steering. The steering ECU 22A receives a signal from a non-illustrated steering angle sensor connected to a steering wheel 14 (see
[0033] As illustrated in
[0034] The reporting device 25 is provided on an upper face of a dashboard 18. As illustrated in
[0035] The controller 20 is configured including a central processing unit (CPU) 20A, read only memory (ROM) 20B, random access memory (RAM) 20C, storage 20D, a communication interface (I/F) 20E, and an input/output I/F 20F. The CPU 20A, the ROM 20B, the RAM 20C, the storage 20D, the communication I/F 20E, and the input/output I/F 20F are connected together so as to be capable of communicating with each other through an internal bus 20G.
[0036] The CPU 20A is a central processing unit that executes various programs and controls various sections. Namely, the CPU 20A reads a program from the ROM 20B, and executes the program using the RAM 20C as a workspace.
[0037] The ROM 20B stores various programs and various data. As illustrated in
[0038] The processing program 100 is a program for performing determination processing and report processing, described later. The vehicle data 110 is data in which a track width between the tires of the vehicle 12, an installation height of the camera 24, and the like are stored. The determination log 120 is data in which determination results of the determination processing are stored. The determination log 120 may be temporarily stored in the RAM 20C.
[0039] As illustrated in
[0040] The storage 20D is configured by a hard disk drive (HDD) or a solid state drive (SSD), and stores various programs and various data. As illustrated in
[0041] The communication I/F 20E is an interface for connecting to the respective ECUs 22. This interface employs a CAN communication protocol. The communication I/F 20E is connected to the respective ECUs 22 through an external bus 20H.
[0042] The input/output I/F 20F is an interface for communicating with the camera 24 installed to the vehicle 12, as well as with the monitor 26 and the speaker 28 of the reporting device 25.
[0043] As illustrated in
[0044] The setting section 200 has a function of setting a track width of the vehicle 12 and an installation height of the camera 24. The setting section 200 is operated by an operator at the time of installation of the controller 20, the camera 24, and the reporting device 25 so as to set the track width of the vehicle 12 and the installation height of the camera 24. The data thus set is stored as the vehicle data 110.
[0045] The image acquisition section 210 has a function of acquiring captured images captured by the camera 24.
[0046] The information acquisition section 220 has a function of acquiring travel information for the vehicle 12 from the respective ECUs 22 through CAN information. Note that, for example, the information acquisition section 220 acquires steering angle information from the steering ECU 22A, and acquires indicator light actuation information from the body ECU 22B. The information acquisition section 220 may also acquire weather information, traffic information, and the like from an external server via the DCM 22C.
[0047] The detection section 230 has a function of detecting any objects O in a captured image captured by the camera 24. The object O may be a vehicle OV traveling on the road, or may be a pedestrian OP crossing the road (see
[0048] The generation section 240 has a function of generating determination areas DA according to the direction of movement of the vehicle 12 based on the CAN information acquired by the information acquisition section 220, as well as the position and speed of the object O. As illustrated in
[0049] The generation section 240 also generates the determination areas DA according to the CAN information and the position and speed of the object O. The determination areas DA are areas set with respect to the basis area BA so as to have depth, and so as to have a width that has been increased or decreased with respect to the trajectory TL and the trajectory TR. Note that in the present exemplary embodiment, three of the determination areas DA are set, namely a first area A1, a second area A2, and a third area A3.
[0050] The first area A1 is a determination area DA solely based on the position of the vehicle 12. In cases in which the object O is a vehicle OV, as illustrated in Table 1, the first area A1 is set with a depth range spanning from the vehicle 12 to 8 m away from the vehicle 12, and is normally set with a width range corresponding to the track width. Moreover, in cases in which an indicator light is actuated, the first area A1 is set wider than normal, namely to a track width range+1 m. The single-dotted dashed lines in
TABLE 1 (Vehicle OV, first area A1)
Depth | Left-right width | Risk level
8 m from vehicle 12 | Track width of vehicle 12 | 1.0
8 m from vehicle 12 | Track width of vehicle 12 + 1 m on left/right (when indicator light actuated) | 1.0
[0051] In cases in which the object O is a pedestrian OP, as illustrated in Table 2, the first area A1 is set with a depth range spanning from the vehicle 12 to 8 m away from the vehicle 12, and set with a width range corresponding to the track width + 2 m. Namely, the first area A1 is wider in cases in which the object O is the pedestrian OP than in cases in which the object O is the vehicle OV. The dashed lines in
TABLE 2 (Pedestrian OP, first area A1)
Depth | Left-right width | Risk level
8 m from vehicle 12 | Track width of vehicle 12 + 2 m on left/right | 1.0
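The selection of the first area A1 geometry described above can be expressed as a simple lookup. The following sketch is illustrative only; the function name, the argument layout, and the returned tuple are assumptions rather than part of the embodiment.

```python
# Sketch of first area A1 generation per Tables 1 and 2.
# track_width is the vehicle's track width in metres (from vehicle data 110).
# Returns (depth_m, total_width_m, risk_level); names are illustrative.

def first_area(object_type, track_width, indicator_on=False):
    depth_m = 8.0  # A1 spans 8 m ahead of the vehicle in both tables
    if object_type == "pedestrian":
        # Table 2: width increased by 2 m on each side for pedestrians
        return depth_m, track_width + 2 * 2.0, 1.0
    # Table 1: vehicles get the bare track width, or +1 m per side
    # while an indicator light is actuated
    extra = 1.0 if indicator_on else 0.0
    return depth_m, track_width + 2 * extra, 1.0
```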
[0052] The second area A2 is a determination area DA that reflects a vehicle front-rear direction speed of the object O. The second area A2 is set as illustrated in Table 3 both in cases in which the object O is a vehicle OV and in cases in which the object O is a pedestrian OP. The second area A2 is set with depth ranges spanning from the vehicle 12 to 8 m, 12 m, or 14 m away from the vehicle 12, and set with a width range corresponding to the track width. In cases in which the object O has entered the second area A2, a risk level of 1.0 is applied if within a range spanning 8 m from the vehicle 12, a risk level of 0.9 is applied if within a range spanning 12 m from the vehicle 12, and a risk level of 0.8 is applied if within a range spanning 14 m from the vehicle 12.
[0053] Moreover, in cases in which the depth range is set so as to span 12 m from the vehicle 12 and an indicator light is actuated, the second area A2 is set with a width range corresponding to the track width + 1 m. A risk level of 0.9 is applied in cases in which an object O has entered this second area A2. In cases in which the depth range is set so as to span 14 m from the vehicle 12 and an indicator light is actuated, the second area A2 is set with a width range corresponding to the track width + 2 m. The dashed lines in
TABLE 3 (Vehicle OV or pedestrian OP, second area A2)
Depth | Left-right width | Risk level
8 m from vehicle 12 | Track width of vehicle 12 | 1.0
12 m from vehicle 12 | Track width of vehicle 12 | 0.9
14 m from vehicle 12 | Track width of vehicle 12 | 0.8
12 m from vehicle 12 | Track width of vehicle 12 + 1 m on left/right (when indicator light actuated) | 0.9
14 m from vehicle 12 | Track width of vehicle 12 + 2 m on left/right (when indicator light actuated) | 0.8
[0054] The third area A3 is a determination area DA that reflects a left-right direction speed of the object O. The third area A3 is set as illustrated in Table 4 both in cases in which the object O is a vehicle OV and in cases in which the object O is a pedestrian OP. The third area A3 is set with a depth range spanning from the vehicle 12 to 8 m away from the vehicle 12, and set with a specific width range. In cases in which the object O has entered the third area A3, a risk level of 1.0 is applied if the object is at a left-right direction width position corresponding to the track width range, a risk level of 0.8 is applied if the object is at a left-right direction width position corresponding to the track width range + an inner/outer wheel trajectory difference, and a risk level of 0.5 is applied if the object is at a left-right direction width position corresponding to a range of a path of the vehicle 12 + an inner/outer wheel trajectory difference + a human stopping distance. The solid line in
TABLE 4 (Vehicle OV or pedestrian OP, third area A3)
Depth | Left-right width | Risk level
8 m from vehicle 12 | Track width of vehicle 12 | 1.0
8 m from vehicle 12 | Track width of vehicle 12 + inner/outer turning sweep | 0.8
8 m from vehicle 12 | Path of vehicle 12 + inner/outer turning sweep + human stopping distance | 0.5
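The risk levels of Tables 3 and 4 can likewise be sketched as lookups. The band names and boundary handling as coded below are one illustrative reading of the tables, not the embodiment's own implementation.

```python
# Sketch of the risk levels in Tables 3 and 4 (second area A2 and
# third area A3). Function names and band labels are illustrative.

def second_area_risk(depth_m):
    """Table 3: risk level by depth band within the second area A2."""
    if depth_m <= 8.0:
        return 1.0
    if depth_m <= 12.0:
        return 0.9
    if depth_m <= 14.0:
        return 0.8
    return None  # outside the second area A2

def third_area_risk(band):
    """Table 4: risk level by lateral band within the third area A3."""
    return {
        "track_width": 1.0,
        "track_width_plus_sweep": 0.8,         # + inner/outer turning sweep
        "path_plus_sweep_plus_stopping": 0.5,  # + human stopping distance
    }.get(band)
```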
[0055] As illustrated in
[0056] Note that the danger level for the first area A1 is computed using Equation 1:
Danger level = risk level   (Equation 1)
[0057] According to Equation 1, this determination is based solely on the position of the vehicle 12, and the danger level is a value equivalent to the risk level.
[0058] The danger level for the second area A2 is computed using Equation 2:
Danger level = risk level × min(30, speed difference with object O)/30   (Equation 2)
[0059] Note that the speed difference is measured in units of km/h.
[0060] According to Equation 2, this determination reflects the vehicle front-rear direction speed of the object O, and the danger level is a value corresponding to the risk level or lower.
[0061] The danger level for the third area A3 is computed using Equation 3:
Danger level = risk level + 0.5 × x   (Equation 3)
[0062] Note that x = 1 when an object O on the left side of the vehicle is moving toward the right, or when an object O on the right side of the vehicle is moving toward the left.
[0063] x = 0 in all other cases.
[0064] According to Equation 3, this determination reflects the left-right direction speed of the object O, and the danger level is a value corresponding to the risk level + 0.5 in cases in which the object O is approaching the vehicle 12.
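Equations 1 through 3 can be written directly as functions. The sketch below follows the equations as stated, with the speed difference in km/h as noted for Equation 2; the function names are illustrative.

```python
# Equations 1 to 3 as plain functions.

def danger_level_a1(risk):
    # Equation 1: position only, danger level equals the risk level
    return risk

def danger_level_a2(risk, speed_diff_kmh):
    # Equation 2: scaled by front-rear closing speed, capped at 30 km/h
    return risk * min(30.0, speed_diff_kmh) / 30.0

def danger_level_a3(risk, approaching):
    # Equation 3: x = 1 when the object is moving laterally toward the
    # vehicle's path (left object moving right, or right object moving
    # left), x = 0 otherwise
    x = 1 if approaching else 0
    return risk + 0.5 * x
```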
[0065] The output section 260 has a function of outputting caution information to the reporting device 25 in cases in which the determination section 250 has determined that danger is present. When the output section 260 outputs such caution information, the reporting device 25 displays an image on the monitor 26 to prompt the driver D to exercise caution, and outputs audio or an alarm from the speaker 28 to prompt the driver D to exercise caution.
[0066] Control Flow
[0067] Explanation follows regarding a flow of the determination processing and the report processing executed by the controller 20 of the present exemplary embodiment, with reference to
[0068] First, explanation follows regarding a flow of the determination processing, with reference to the flowchart of
[0069] At step S100 in
[0070] At step S101, the CPU 20A acquires image information relating to a captured image captured by the camera 24.
[0071] At step S102, the CPU 20A estimates the horizon. The horizon is estimated using known technology. For example, the CPU 20A may detect straight line components of a road such as white lines on the road, and estimate horizon coordinates from an extracted point where all the straight lines intersect.
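The vanishing-point construction described at step S102 can be illustrated with a minimal line-intersection sketch; the function and the sample coordinates below are illustrative assumptions, not the embodiment's exact implementation.

```python
# Horizon estimation sketch: straight road markings (e.g. white lines)
# converge at a vanishing point; its image row estimates the horizon.

def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4, or None if parallel."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None  # parallel lines never intersect
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Two lane-line segments converging toward the image centre; the y
# coordinate of their intersection is the estimated horizon row.
vp = line_intersection((100, 400), (300, 200), (540, 400), (340, 200))
```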
[0072] At step S103, the CPU 20A detects any objects O in the captured image. Specifically, the CPU 20A detects an object O such as a vehicle OV or a pedestrian OP using a known image recognition method or the like.
[0073] At step S104, the CPU 20A executes tracking. The object O detected at step S103 is thus tracked.
[0074] At step S105, the CPU 20A estimates a distance to the tracked object O. Specifically, a bounding box BB (see
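The distance estimate at step S105 combines the bounding box with the camera installation height stored as the vehicle data 110 and the estimated horizon. One common flat-ground pinhole reading of such a monocular estimate is sketched below; the focal length parameter, the coordinate convention (image rows increasing downward), and the function name are assumptions, not taken from the embodiment.

```python
# Flat-ground monocular distance sketch: an object standing on the road
# at distance d projects its contact point (bounding box bottom) to a
# row (y_foot - y_horizon) pixels below the horizon, where
# d = camera_height * focal_length / (y_foot - y_horizon).

def ground_distance(y_foot, y_horizon, camera_height_m, focal_px):
    """Estimated distance to an object whose bounding box bottom lies
    at image row y_foot, assuming a flat road surface."""
    dy = y_foot - y_horizon  # pixels below the horizon
    if dy <= 0:
        return float("inf")  # at or above the horizon: effectively far away
    return camera_height_m * focal_px / dy
```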
[0075] At step S106, the CPU 20A estimates the danger level at a current position. Specifically, in a case in which the object O is a vehicle OV, the CPU 20A defines a first area A1 as a determination area DA according to Table 1, and in cases in which the object O is a pedestrian OP, the CPU 20A defines a first area A1 as a determination area DA according to Table 2. The CPU 20A then substitutes a risk level applied according to the object O present in the first area A1 into Equation 1 to find the danger level. For example, in a case in which a pedestrian OP is present in the first area A1 as illustrated in
[0076] At step S107, the CPU 20A estimates a danger level for front-rear speed. Specifically, regardless of whether the object O is a vehicle OV or a pedestrian OP, the CPU 20A defines second areas A2 as determination areas DA according to Table 3. The CPU 20A then substitutes a risk level applied according to an object O present in the corresponding second area A2 into Equation 2 to find the danger level. For example, in a case in which the vehicle OV is present in the second area A2 set with a range spanning 12 m from the vehicle 12 and corresponding to the track width as illustrated in
[0077] At step S108, the CPU 20A estimates the danger level for left-right speed. Specifically, regardless of whether the object O is a vehicle OV or a pedestrian OP, the CPU 20A defines third areas A3 as determination areas DA according to Table 4. The CPU 20A then substitutes a risk level applied according to an object O present in the corresponding third area A3 into Equation 3 to find the danger level. For example, in a case in which a pedestrian OP is present in the third area A3 set with a range spanning 8 m from the vehicle 12 and corresponding to the track width as illustrated in
[0078] At step S109, the CPU 20A determines whether or not any one of the danger levels computed for the respective determination areas DA exceeds the threshold. In the present exemplary embodiment, the threshold is set to 0.8. In cases in which the CPU 20A determines that any one of the danger levels exceeds the threshold, processing proceeds to step S110. On the other hand, in cases in which the CPU 20A determines that none of the danger levels exceeds the threshold, namely that all of the danger levels are at or below the threshold, processing proceeds to step S111.
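Steps S109 through S111 reduce to comparing the per-area danger levels against the fixed threshold of 0.8. A minimal sketch follows; the function name and the list representation of the computed levels are assumptions.

```python
# Danger determination per steps S109 to S111: "danger" when any area's
# danger level exceeds the threshold, "no danger" when all are at or
# below it.

THRESHOLD = 0.8

def judge(danger_levels, threshold=THRESHOLD):
    """danger_levels: danger levels computed for areas A1 to A3."""
    if any(level > threshold for level in danger_levels):
        return "danger"      # step S110
    return "no danger"       # step S111
```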
[0079] At step S110, the CPU 20A makes a determination of “danger”, indicating that there is a high probability that the vehicle 12 will contact the object O if the vehicle 12 continues on its present course.
[0080] At step S111, the CPU 20A makes a determination of “no danger”, indicating that there is a low probability that the vehicle 12 will contact the object O even if the vehicle 12 continues on its present course.
[0081] At step S112, the CPU 20A determines whether or not to end the determination processing. In cases in which the CPU 20A makes a determination to end the determination processing, the determination processing is ended. On the other hand, in cases in which the CPU 20A makes a determination not to end the determination processing, processing returns to step S100.
[0082] Next, explanation follows regarding a flow of the report processing, with reference to the flowchart of
[0083] At step S200 in
[0084] At step S201, the CPU 20A determines whether or not the reporting device 25 has yet to report. In cases in which the CPU 20A determines that the reporting device 25 has yet to report, processing proceeds to step S202. On the other hand, in cases in which the CPU 20A determines that the reporting device 25 is already reporting, processing returns to step S200.
[0085] At step S202, the CPU 20A starts reporting by outputting caution information to the reporting device 25. The reporting device 25 thus displays text such as “release accelerator” on the monitor 26, and outputs an alarm from the speaker 28.
[0086] At step S203, the CPU 20A determines whether or not the reporting device 25 is currently reporting. In cases in which the CPU 20A determines that the reporting device 25 is currently reporting, processing proceeds to step S204. On the other hand, in cases in which the CPU 20A determines that the reporting device 25 is not currently reporting, namely that the reporting device 25 is yet to report, processing returns to step S200.
[0087] At step S204, the CPU 20A stops output of the caution information to the reporting device 25 and ends the reporting. The display on the monitor 26 and the alarm from the speaker 28 of the reporting device 25 are thus ended.
SUMMARY
[0088] The detection section 230 implemented by the controller 20 of the present exemplary embodiment detects an object O in a captured image captured by the camera 24 provided to the vehicle 12, and the generation section 240 generates determination areas DA according to the direction of movement of the vehicle 12. Three types of determination area DA, namely the first area A1 to the third area A3, are generated based on CAN information, as well as on the position and speed of the object O. Specifically, the first area A1 based solely on the position of the vehicle 12, the second area A2 reflecting the vehicle front-rear direction speed of the object O, and the third area A3 reflecting the left-right direction speed of the object O are respectively generated.
[0089] In cases in which the object O is present in any of the determination areas DA, the determination section 250 implemented by the controller 20 determines whether danger is present according to the danger level computed for that object O. In cases in which danger has been determined to be present, the controller 20 then reports the presence of danger to the driver D using the reporting device 25. The present exemplary embodiment thereby enables danger to be determined in consideration of information regarding the position, speed, and the like of the object O, as well as the CAN information of the vehicle 12. Since the generation section 240 generates the first area A1 to the third area A3, the present exemplary embodiment is moreover capable of taking plural conditions, such as the position and speed of the object O, into account during danger determination, thereby enabling danger determination to be performed according to the circumstances.
[0090] The determination section 250 implemented by the controller 20 of the present exemplary embodiment further quantifies the dangerousness of the object O present in a determination area DA as a danger level, and determines whether or not danger is present based on whether or not the danger level exceeds the threshold. The present exemplary embodiment thus enables danger to be determined according to the extent of a positional relationship between the vehicle 12 and the object O.
[0091] The determination section 250 of the present exemplary embodiment further performs the danger determination based on the captured image in a prescribed number of preceding frames, namely ten frames. By maintaining the danger determination over this prescribed duration, the present exemplary embodiment is capable of obtaining determination results that err on the side of safety, even if determination results regarding a given object O vary between individual frames.
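The frame-maintenance behaviour of paragraph [0091] (and claim 8) can be sketched with a sliding window over recent per-frame determinations. The class name and the window rule (any dangerous frame in the window keeps the warning active) are one illustrative reading; the embodiment's exact maintenance rule may differ.

```python
from collections import deque

# Sliding-window sketch of maintaining the "danger" determination over
# a prescribed number of preceding frames (ten in the embodiment).

class DangerLatch:
    def __init__(self, window=10):
        # deque with maxlen drops the oldest frame automatically
        self.history = deque(maxlen=window)

    def update(self, frame_is_danger):
        """Record this frame's determination and return the maintained
        result: danger if any frame in the recent window was dangerous,
        so single-frame dropouts do not clear the warning."""
        self.history.append(frame_is_danger)
        return any(self.history)
```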
REMARKS
[0092] Although the determination section 250 executes determination based on the first area A1, determination based on the second area A2, and determination based on the third area A3 in the present exemplary embodiment, the types of determination area DA are not limited to the first area A1 to the third area A3.
[0093] Although the determination section 250 computes danger levels and executes determination in sequence from the first area A1 through to the third area A3, the determination sequence is not limited thereto. The determination sequence may be modified according to the number of occupants in the vehicle 12, the content of the CAN information, weather conditions, or the like. In particular, for example, determination based on the third area A3 that relates to the left-right direction may be prioritized during rainy weather, in consideration of poor vehicle width direction visibility.
[0094] Although the threshold is set to a fixed value of 0.8 in the present exemplary embodiment, there is no limitation thereto, and the threshold may be modified according to the number of occupants in the vehicle 12, the content of the CAN information, weather conditions, or the like. For example, the threshold may be set so as to become lower as the steering angle of the steering wheel 14 acquired through the CAN information increases. Namely, the determination section 250 may perform determination by employing a threshold that is set lower as the steering angle of the steering wheel acquired through the CAN information configuring the travel information increases.
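One hedged shape for the steering-angle-dependent threshold suggested above is a linear interpolation from the base threshold down to a floor at full steering lock. All constants in the sketch below (the floor value and the full-lock angle) are illustrative assumptions, not values from the embodiment.

```python
# Sketch: threshold lowered as the steering angle acquired through the
# CAN information increases. Linear interpolation between a base value
# (the embodiment's 0.8) and an assumed floor at an assumed full lock.

def steering_threshold(steering_angle_deg, base=0.8, floor=0.5,
                       full_lock_deg=540.0):
    frac = min(abs(steering_angle_deg) / full_lock_deg, 1.0)
    return base - (base - floor) * frac
```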
[0095] Although the steering angle information for the steering wheel 14 and the indicator light actuation information are acquired as the CAN information configuring the travel information of the vehicle 12, and these are employed in danger determination in the present exemplary embodiment, the CAN information employed in determination is not limited thereto. For example, brake actuation information, acceleration sensor information, millimeter-wave radar sensor information, or the like may be acquired through the CAN information and employed in danger determination.
[0096] Note that the various processing executed by the CPU 20A reading and executing software (programs) in the exemplary embodiment described above may be executed by various types of processor other than a CPU. Such processors include programmable logic devices (PLD) that allow circuit configuration to be modified post-manufacture, such as a field-programmable gate array (FPGA), and dedicated electric circuits, these being processors including a circuit configuration custom-designed to execute specific processing, such as an application specific integrated circuit (ASIC). The processing described above may be executed by any one of these various types of processor, or by a combination of two or more of the same type or different types of processor (such as plural FPGAs, or a combination of a CPU and an FPGA). The hardware structure of these various types of processors is more specifically an electric circuit combining circuit elements such as semiconductor elements.
[0097] Moreover, in the exemplary embodiment described above, explanation has been given regarding a configuration in which the respective programs are pre-stored (installed) on a computer-readable non-transitory storage medium. For example, the processing program 100 of the controller 20 is pre-stored in the ROM 20B. However there is no limitation thereto, and the respective programs may be provided in a format recorded on a non-transitory storage medium such as a compact disc read only memory (CD-ROM), digital versatile disc read only memory (DVD-ROM), or universal serial bus (USB) memory. Alternatively, the programs may be provided in a format downloadable from an external device over a network.
[0098] The processing of the exemplary embodiment described above is not limited to being executed by a single processor, and may be executed by plural processors working in coordination. The processing flows described in the above exemplary embodiment are merely examples, and unnecessary steps may be omitted, new steps may be added, and the processing sequence may be changed within a range not departing from the spirit thereof.