Visual perception assistance system and visual-perception target object detection system
10717447 · 2020-07-21
Inventors
- Tadashi Isa (Kyoto, JP)
- Masatoshi Yoshida (Okazaki, JP)
- Richard Veale (Kyoto, JP)
- Yusaku Takeda (Hiroshima, JP)
- Toshihiro Hara (Higashihiroshima, JP)
- Koji Iwase (Hiroshima, JP)
- Atsuhide Kishi (Hiroshima, JP)
- Kazuo Nishikawa (Hiroshima, JP)
- Takahide Nozawa (Hiroshima, JP)
CPC classification
B60W50/14
PERFORMING OPERATIONS; TRANSPORTING
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
G06V20/58
PHYSICS
G06V20/597
PHYSICS
B60K2360/149
PERFORMING OPERATIONS; TRANSPORTING
G02B2027/0141
PHYSICS
G08G1/166
PHYSICS
G08G1/09623
PHYSICS
B60R2300/308
PERFORMING OPERATIONS; TRANSPORTING
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
B60K35/28
PERFORMING OPERATIONS; TRANSPORTING
B60K35/10
PERFORMING OPERATIONS; TRANSPORTING
G02B27/00
PHYSICS
G02B27/0093
PHYSICS
International classification
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
G02B27/00
PHYSICS
G08G1/0962
PHYSICS
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
The present invention includes: a visual perception target determination unit for determining a visual perception target by using a saliency model for determining that an object to be visually perceived at a glance is a visual perception target, a surprise model for determining that an object behaving abnormally is a visual perception target, and a normative model for determining that an object to be visually perceived by a viewing action of a driver serving as a norm is a visual perception target; and a visual guidance unit for determining whether or not an overlooked visual perception target to which a line-of-sight direction detected by a line-of-sight detection unit is not directed is present, and guiding a line-of-sight of the driver toward the overlooked visual perception target, when the overlooked visual perception target is present.
Claims
1. A visual perception assistance system, comprising: a surrounding condition acquisition unit for acquiring a surrounding condition of a moving body to be driven by a driver; a visual perception target determination unit for determining a visual perception target being an object that the driver should look at in a surrounding condition acquired by the surrounding condition acquisition unit; a line-of-sight direction detection unit for detecting a line-of-sight direction of the driver; and a visual guidance unit for determining whether or not an overlooked visual perception target to which a line-of-sight direction detected by the line-of-sight direction detection unit is not directed is present among the visual perception targets determined by the visual perception target determination unit, and guiding a line-of-sight of the driver toward the overlooked visual perception target, when the overlooked visual perception target is present, wherein the visual perception target determination unit determines the visual perception target, based on three visual perception models prepared in advance, and the three visual perception models include a saliency model for determining that an object to be visually perceived at a glance is a visual perception target, a surprise model for determining that an object behaving abnormally is a visual perception target, and a normative model for determining that an object to be visually perceived by a viewing action of a driver serving as a norm is a visual perception target.
2. The visual perception assistance system according to claim 1, wherein the moving body is constituted by a vehicle, and the surrounding condition acquisition unit is constituted by a camera for photographing an area ahead of the vehicle.
3. The visual perception assistance system according to claim 1, wherein the visual guidance unit performs visual guidance by displaying an index at a position of the overlooked visual perception target.
4. The visual perception assistance system according to claim 1, further comprising: a driver state detection unit for detecting the driver state, wherein the visual guidance unit changes a degree of intensity of visual guidance depending on a driver state to be detected by the driver state detection unit.
5. The visual perception assistance system according to claim 4, wherein the visual guidance unit changes the degree of intensity of visual guidance by changing a degree of conspicuousness of an index to be displayed at a position of the overlooked visual perception target.
6. The visual perception assistance system according to claim 4, wherein the driver state detection unit detects at least a state that driving load is large as the driver state, and when the driver state detection unit detects that the driving load is large, the visual guidance unit emphasizes visual guidance, as compared with a case where the driver state detection unit detects that the driving load is small.
7. The visual perception assistance system according to claim 4, wherein the driver state detection unit detects at least a state that the driver is absent-minded, as the driver state, and when the driver state detection unit detects that the driver is absent-minded, the visual guidance unit emphasizes visual guidance, as compared with a case where the driver state detection unit detects that the driver is not absent-minded.
8. The visual perception assistance system according to claim 1, wherein when a plurality of the overlooked visual perception targets are present, the visual guidance unit ranks the plurality of overlooked visual perception targets in a viewing order, and performs visual guidance of the plurality of overlooked visual perception targets according to the ranking.
9. A visual perception target detection system, comprising: a surrounding condition acquisition unit for acquiring a surrounding condition of an observer; and a visual perception target determination unit for determining a visual perception target being an object that the observer should look at in a surrounding condition acquired by the surrounding condition acquisition unit, wherein the visual perception target determination unit determines the visual perception target, based on three visual perception models prepared in advance, and the three visual perception models include a saliency model for determining that an object to be visually perceived at a glance is a visual perception target, a surprise model for determining that an object behaving abnormally is a visual perception target, and a normative model for determining that an object to be visually perceived by a viewing action of an observer serving as a norm is a visual perception target.
10. The visual perception target detection system according to claim 9, further comprising: a display unit for displaying a visual perception target determined by the visual perception target determination unit.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
(9) The controller U receives signals from device components such as various types of sensors, the camera 1, the eye camera 2, the driver state detection unit 3, the vehicle state detection unit 4, and the navigation device 5.
(10) The camera 1 is a camera for photographing an area ahead of the own vehicle. A surrounding condition ahead of the own vehicle is acquired by the camera 1. The camera 1 is constituted by a color stereo camera, for example. Thus, it is possible to detect a distance from the own vehicle to a visual perception target, and a color of the visual perception target. In this example, a distance to a visual perception target is detected by using the camera 1. The present invention, however, is not limited to the above. A distance to a visual perception target may be detected by using a radar device.
(11) The eye camera 2 is mounted in a vehicle compartment of the own vehicle, and detects a line-of-sight direction of a driver. The eye camera 2 may acquire an image of the driver's eyes, extract a reference point such as a corner of the eye or an inner corner of the eye, and a moving point such as the pupil from the acquired image, and detect a line-of-sight direction within an actual space, based on a position of the moving point with respect to the reference point. The line-of-sight direction is represented by a straight line within a three-dimensional coordinate space in which the own vehicle is set as a reference, for example. In addition to the above, the eye camera 2 also detects a line-of-sight state of the driver (a line-of-sight movement, blinking, eyeball fixation, and a pupil diameter).
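The mapping from eye-image landmarks to a line-of-sight direction described above can be sketched as follows. This is only an illustration, not the embodiment's implementation: the coordinate frame (x: forward, y: left, z: up), the per-driver gain constants, and the function name are all assumptions.

```python
import math

def gaze_direction(reference_pt, moving_pt, gain=(0.08, 0.06)):
    """Map the pupil (moving point) offset from an eye-corner landmark
    (reference point) to yaw/pitch angles, then to a unit gaze vector in a
    vehicle-fixed frame. The gains are hypothetical per-driver calibration
    constants converting pixels to radians."""
    dx = moving_pt[0] - reference_pt[0]   # horizontal pupil offset (pixels)
    dy = moving_pt[1] - reference_pt[1]   # vertical pupil offset (pixels)
    yaw = -gain[0] * dx                   # sign depends on camera mounting
    pitch = -gain[1] * dy
    # Convert the two angles to a unit direction vector.
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))
```

With the pupil centered on the reference point, the sketch yields the straight-ahead direction, consistent with the line-of-sight being represented as a straight line in a vehicle-referenced coordinate space.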
(12) The driver state detection unit 3 includes an image sensor such as a CCD camera or a CMOS camera, for example, and detects a facial expression of the driver by photographing a face image of the driver. Further, the driver state detection unit 3 acquires the line-of-sight state (a line-of-sight movement, blinking, eyeball fixation, and a pupil diameter) detected by the eye camera 2. Further, the driver state detection unit 3 includes, for example, a heart rate sensor provided on a driver seat, and detects a heart rate of the driver. Further, the driver state detection unit 3 includes, for example, a heart rate sensor or an image sensor such as a CCD camera or a CMOS camera, and detects a breathing state (a respiratory rate or a depth of breathing). The driver state detection unit 3 includes, for example, a resistance sensor, and detects a skin resistance. Further, the driver state detection unit 3 includes, for example, a pulse wave sensor provided on a steering wheel, and detects a fingertip pulse wave. The driver state detection unit 3 includes, for example, a 6-channel myoelectric sensor provided on the steering wheel, and detects EMG of the upper limb muscles. The driver state detection unit 3 includes, for example, a 3-channel myoelectric sensor provided on the driver seat, and detects EMG of the lower limb muscles. Further, the driver state detection unit 3 includes, for example, a microphone, and detects voice information. The voice information includes, for example, a tone of voice. The driver state detection unit 3 is constituted by, for example, a load sensor provided on the driver seat, and detects a seating pressure with respect to the driver seat.
(13) The driver state detection unit 3 calculates an assessment value of the driver state by inputting the facial expression, line-of-sight state, heart rate, breathing state, skin resistance, fingertip pulse wave, EMG of the upper limb muscles, EMG of the lower limb muscles, voice information, and seating pressure described above into a predetermined mathematical expression for use in assessing the driver state. Then, the driver state detection unit 3 may detect the driver state from the assessment value. In this example, an assessment value indicating a degree of awakening may be used as the assessment value of the driver state. The assessment value, for example, increases in the plus direction as the degree of awakening of the driver increases, and increases in the minus direction as the degree of absent-mindedness of the driver increases. Therefore, the driver state detection unit 3 may determine that the driver is awakened when the assessment value is larger than a predetermined plus reference value, and may determine that the driver is absent-minded when the assessment value is smaller than a predetermined minus reference value (is large in the minus direction). Note that a facial expression includes, for example, an expression of joy, an expression of anger, and an expression of sadness, and is quantified by predetermined numerical values with respect to these expressions. Further, the voice information includes joy, anger, sadness, and the like, and is quantified by predetermined numerical values with respect to these feelings.
(14) The vehicle state detection unit 4 includes, for example, a vehicle speed sensor, and detects a speed of the own vehicle. Further, the vehicle state detection unit 4 includes, for example, a speed sensor for detecting an engine speed, and detects the engine speed. Further, the vehicle state detection unit 4 includes, for example, a steering angle sensor, and detects a steering angle of the wheels. Further, the vehicle state detection unit 4 includes, for example, a wiper operation sensor for detecting an operating condition of a wiper, and detects the operating condition of the wiper. Note that the operating condition of the wiper is used for detecting a weather (e.g. a rainy weather or a snowy weather). Further, the vehicle state detection unit 4 includes, for example, a light sensor for detecting an operating condition of a light mounted in the own vehicle, and detects the operating condition of the light. Note that the operating condition of the light is used for detecting the day and night, for example.
(15) The navigation device 5 includes a GPS sensor, and a processor for searching a route to a destination. The navigation device 5 acquires road information relating to conditions of a road on which the own vehicle is currently traveling, and road conditions ahead of the own vehicle. The road information to be acquired in this example includes, for example, information relating to a highway, an open road, a straight road, a curved road, an intersection, contents of various types of road signs, presence of a traffic signal, and current position information of the own vehicle.
(16) As will be described later, the controller U controls the projection mapping device 11, the head-up display 12, and the speaker 13 in order to guide a line-of-sight of the driver toward a visual perception target (an overlooked visual perception target), which may be overlooked by the driver.
(17) The controller U includes a visual perception target determination unit 24 and a visual guidance unit 25. The visual perception target determination unit 24 determines a visual perception target that the driver should visually perceive by using three types of visual perception models, i.e., a saliency model 21, a surprise model 22, and a normative model 23. The saliency model 21 includes data for determining that an object that has saliency and that the driver can visually perceive at a glance is a visual perception target. An object to be detected by using the saliency model 21 is, for example, an object of a relatively large size, an object of relatively high brightness, an object with relatively strong contrast, or an object of a unique shape, among objects to be visually perceived by the driver during driving. Numerous pieces of data for use in detecting an object to be visually perceived at a glance are stored in the saliency model 21.
(18) The surprise model 22 includes data for determining that an object behaving abnormally (behavior against which precautions are required) is a visual perception target. The surprise model 22 includes, for example, data for determining that an object such as a vehicle or a pedestrian behaving abnormally is a visual perception target. Numerous pieces of data for use in detecting an object such as a vehicle, a pedestrian, or a motorcycle, which may behave abnormally, and data for use in detecting that each object behaves abnormally, are stored in the surprise model 22 in association with each other.
(19) The normative model 23 includes data for determining that an object to be visually perceived by a viewing action of a skillful driver serving as a norm is a visual perception target. A target the driver should look at, a position the driver should look at, an order in which the driver should look at objects, and the like are stored in the normative model 23 in association with numerous traveling conditions (combinations of traveling environments including surrounding conditions, vehicle states, and the like).
(20) The visual guidance unit 25 determines whether or not an overlooked visual perception target to which a line-of-sight direction to be detected by the eye camera 2 is not directed is present among visual perception targets determined by the visual perception target determination unit 24. When an overlooked visual perception target is present, the visual guidance unit 25 guides a line-of-sight of the driver toward the overlooked visual perception target. The visual guidance unit 25 may guide the line-of-sight by using the projection mapping device 11, the head-up display 12, and the speaker 13.
(21) The aforementioned three types of visual perception models are described one by one by using an automobile as an example.
(24) In the saliency model 21, an object to be visually perceived at a glance is determined to be a visual perception target. Therefore, for example, in the example of
(25) It is highly likely that a visual perception target to be determined by the saliency model 21 is visually perceived by the driver. Therefore, even when the line-of-sight direction of the driver does not completely coincide with a direction in which a visual perception target determined by the saliency model 21 is present, it is determined that the driver visually perceives the visual perception target, when the line-of-sight direction of the driver is directed toward the visual perception target to some extent. In the example of
(26) In this example, the expression "completely coincide" means that a visual perception target and a line-of-sight direction intersect each other, for example. Further, the expression "the line-of-sight direction of the driver is directed toward the visual perception target to some extent" means that the visual perception target is present within a field of view including the line-of-sight direction as a center. The visual guidance unit 25 may set, as the field of view, a substantially conical area, which is determined in advance and includes the line-of-sight direction as a center.
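The test of whether a visual perception target lies within the substantially conical field of view around the line-of-sight direction can be sketched as follows. The 15-degree half angle and the function name are assumptions; the patent only specifies that the cone is determined in advance.

```python
import math

def in_field_of_view(gaze_dir, eye_pos, target_pos, half_angle_deg=15.0):
    """Return True if the target lies inside a cone of half_angle_deg
    centered on the line-of-sight direction (half angle is illustrative)."""
    to_target = [t - e for t, e in zip(target_pos, eye_pos)]
    norm_t = math.sqrt(sum(c * c for c in to_target))
    norm_g = math.sqrt(sum(c * c for c in gaze_dir))
    # Cosine of the angle between the gaze direction and the target bearing.
    dot = sum(g * t for g, t in zip(gaze_dir, to_target))
    cos_angle = dot / (norm_g * norm_t)
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

A target slightly off the gaze axis still counts as visually perceived "to some extent", while a target at right angles to the gaze falls outside the cone and is a candidate for an overlooked visual perception target.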
(27) On the other hand, when the line-of-sight direction of the driver is largely deviated from the traffic light 45, and it is judged that the driver does not visually perceive the traffic light 45, the traffic light 45 is determined to be an overlooked visual perception target. In this case, for example, an index 1, indicated by a dotted round shape, is projected toward the traffic light 45 by the projection mapping device 11 disposed within the vehicle compartment. Then, the index 1 is displayed on the front window glass 31 in such a manner that the index 1 overlaps the traffic light 45 at a position of the traffic light 45, and the line-of-sight of the driver is guided to the index 1. Note that the expression "largely deviated" means that a visual perception target is present outside the field of view, for example.
(28) Likewise, when it is determined that the driver does not visually perceive the signboard 46, an index 2 is projected toward the signboard 46. The index 2 is displayed on the front window glass 31 in such a manner that the index 2 overlaps the signboard 46 at a position of the signboard 46.
(29) Note that the index 1 or the index 2 may be displayed by using the head-up display 12 disposed at a front position of the driver, for example. Further, a voice guide such as "pay attention to the traffic light on the left side!" or "pay attention to the signboard on the right side!" may be output from the speaker 13, for example. The visual perception targets illustrated in
(30) As will be described later, an object that the driver does not have to look at may be eliminated from the visual perception targets. For example, the signboard 46 may be eliminated from the visual perception targets. In this case, an object that should be eliminated may be stored in the saliency model 21 itself. Alternatively, a visual perception target that meets a predetermined elimination condition may be removed from the visual perception targets determined by the saliency model 21. In this example, as the predetermined elimination condition, it is possible to use, for example, a condition that a visual perception target determined by the saliency model 21 is away from the own vehicle by a predetermined distance or more, or a condition that a visual perception target is a fixed object located on the side of the opposite lane.
(31) Next, the surprise model 22 is described with reference to
(32) Next, the normative model 23 is described with reference to
(34) Further, a left side mirror 35L for use in perceiving a condition on the left side, an intermediate portion of a left peripheral portion of the front window glass 31 in the up-down direction which is likely to become a blind spot, and a meter panel 37 for use in checking the vehicle speed and the like are determined to be visual perception targets. Therefore, the visual guidance unit 25 projects an index 3 toward the left side mirror 35L, projects an index 4 toward the intermediate portion of the left peripheral portion of the front window glass 31, and projects an index 5 toward the meter panel 37 by using the projection mapping device 11.
(35) The normative model 23 stores data indicating objects to be visually perceived by a viewing action of a skillful driver serving as a norm, and data indicating positions of the objects, for all possible (numerous) conditions. Further, the normative model 23 also stores a cycle at which the driver should visually perceive each object. For example, when it is assumed that a skillful driver looks at the left side mirror 35L at least once per ten seconds, data such as "once per ten seconds" are stored for the left side mirror 35L as the cycle at which the driver should visually perceive it.
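The glance-cycle bookkeeping implied above can be illustrated by the following sketch; the class and method names are hypothetical, and the ten-second cycle is the example value from the description.

```python
class GlanceMonitor:
    """Track when each normative target was last looked at and flag
    targets whose normative glance cycle has lapsed (illustrative)."""
    def __init__(self, cycles):
        # cycles: e.g. {"left_side_mirror": 10.0} seconds per required glance
        self.cycles = cycles
        self.last_seen = {k: None for k in cycles}

    def record_glance(self, target, t):
        """Record that the driver's line-of-sight reached `target` at time t."""
        self.last_seen[target] = t

    def overdue(self, t):
        """Targets not glanced at within their normative cycle by time t."""
        return [k for k, cycle in self.cycles.items()
                if self.last_seen[k] is None or t - self.last_seen[k] > cycle]
```

A target returned by `overdue` would be a candidate for index projection (e.g. the index 3 toward the left side mirror 35L).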
(36) In
(37) In this example, when the driver state detection unit 3 detects that driving load of the driver is large or that the driver is absent-minded, the visual guidance unit 25 may emphasize visual guidance. For example, in
(38) In addition to the above, the visual guidance unit 25 may emphasize visual guidance by using the head-up display 12 and the speaker 13, in addition to the projection mapping device 11. Note that the driver state detection unit 3 may determine a magnitude of driving load by taking into account the number of overlooked visual perception targets, a degree of difficulty of the traveling environment, and the like. For example, when the number of overlooked visual perception targets is larger than a predetermined threshold value, the driver state detection unit 3 may determine that driving load is large. When the number of overlooked visual perception targets is equal to or smaller than the predetermined threshold value, the driver state detection unit 3 may determine that driving load is small.
(39) Further, the driver state detection unit 3 may calculate, for example, a degree of difficulty of the traveling environment, based on road conditions around the own vehicle acquired from the navigation device 5, and the number of visual perception targets. When the degree of difficulty is larger than a predetermined threshold value, the driver state detection unit 3 may determine that driving load is large; and when the degree of difficulty is equal to or smaller than the predetermined threshold value, the driver state detection unit 3 may determine that driving load is small. The degree of difficulty may be calculated, for example, by using a predetermined function that outputs a larger value as the number of visual perception targets increases, and that outputs a value depending on the type of road condition. As the value depending on the type of road condition, for example, a value that is larger for an intersection than for a straight road may be used.
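The two driving-load criteria described above, the count of overlooked targets and the difficulty of the traveling environment, can be combined as in this sketch. The thresholds, road-type weights, and function name are invented purely for illustration.

```python
def driving_load_is_large(n_overlooked, road_type, n_targets,
                          overlook_thresh=3, difficulty_thresh=5.0):
    """Illustrative driving-load check: too many overlooked targets, or a
    difficulty score (growing with target count and road complexity)
    exceeding a threshold, means driving load is large."""
    if n_overlooked > overlook_thresh:
        return True
    # Hypothetical weights: an intersection counts for more than a straight road.
    road_weight = {"straight": 1.0, "curve": 2.0, "intersection": 3.0}
    difficulty = 0.5 * n_targets + road_weight.get(road_type, 1.5)
    return difficulty > difficulty_thresh
```

Under these illustrative numbers, an intersection with many visual perception targets yields a large driving load even with no overlooked targets, while a straight road with few targets does not.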
(41) In this case, the visual perception target determination unit 24 determines the motorcycle B as a visual perception target by using the aforementioned three types of visual perception models. Since the line-of-sight of the driver J is not directed toward the motorcycle B, which is determined to be a visual perception target, the visual guidance unit 25 determines the motorcycle B as an overlooked visual perception target. Then, the visual guidance unit 25 projects an index toward the motorcycle B. Thus, the index is displayed at a position on the front window glass 31 associated with the motorcycle B, and the line-of-sight of the driver J is guided to the motorcycle B.
(42) In this case, since the driver is absent-minded, the visual guidance unit 25 may increase a degree of emphasis of visual guidance, as compared with a case where the driver is in a normal condition. For example, the visual guidance unit 25 may display an index clearly, may display an index in a blinking manner, or may display an index with a color of a high saturation, as compared with an ordinary display pattern.
(44) Note that the driver state detection unit 3 may determine whether or not the own vehicle is traveling in a suburb area of monotonous landscape by using a current position of the vehicle and road conditions acquired from the navigation device 5, and a vehicle speed detected by the vehicle state detection unit 4, and the like.
(45) Next, a control example by the controller U is described with reference to the flowchart illustrated in
(46) In Q2, the camera 1 acquires an image indicating a surrounding condition ahead of the own vehicle by photographing an area ahead of the own vehicle.
(47) In Q3, the visual perception target determination unit 24 applies the saliency model 21 to the image acquired in Q2, and determines a visual perception target. In Q4, the visual perception target determination unit 24 applies the surprise model 22 to the image acquired in Q2, and determines a visual perception target. In Q5, the visual perception target determination unit 24 applies the normative model 23 to the image acquired in Q2, and determines a visual perception target.
(48) In Q6, the visual perception target determination unit 24 collects the visual perception targets determined in Q3 to Q5. In Q6, the visual perception target determination unit 24 also performs processing of eliminating an object that the driver does not have to look at from the visual perception targets determined in Q3 to Q5. For example, an object far from the own vehicle, and a fixed object to which the driver does not have to pay attention are eliminated from the visual perception targets.
(49) In Q7, the eye camera 2 detects a line-of-sight direction of the driver.
(50) In Q8, the visual guidance unit 25 determines whether or not a visual perception target outside the field of view of the driver is present among the visual perception targets collected in Q6. When a visual perception target outside the field of view is present, the visual guidance unit 25 extracts the visual perception target as an overlooked visual perception target. More specifically, the visual guidance unit 25 records a line-of-sight direction of the driver for a predetermined period in the past. Then, the visual guidance unit 25 may set a field of view for each of the recorded line-of-sight directions, and may determine a visual perception target outside the set field of view, as an overlooked visual perception target, among the visual perception targets collected in Q6. Note that even when the entirety of a visual perception target is not present in a field of view, the visual guidance unit 25 may determine that the driver successfully visually perceives the visual perception target when a part of the visual perception target is present in the field of view.
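The Q8 extraction of overlooked visual perception targets from the recorded line-of-sight history might look like the following sketch; the function name, data shapes, and view-cone half angle are assumptions.

```python
import math

def find_overlooked(targets, gaze_history, eye_pos, half_angle_deg=15.0):
    """targets: {name: (x, y, z)}; gaze_history: gaze direction vectors
    recorded over the recent past. A target counts as visually perceived if
    it fell inside the view cone of at least one recorded direction;
    otherwise it is extracted as overlooked."""
    cos_limit = math.cos(math.radians(half_angle_deg))
    overlooked = []
    for name, pos in targets.items():
        d = [p - e for p, e in zip(pos, eye_pos)]
        norm_d = math.sqrt(sum(c * c for c in d))
        seen = False
        for g in gaze_history:
            norm_g = math.sqrt(sum(c * c for c in g))
            cos_a = sum(gc * dc for gc, dc in zip(g, d)) / (norm_g * norm_d)
            if cos_a >= cos_limit:   # target was inside this field of view
                seen = True
                break
        if not seen:
            overlooked.append(name)
    return overlooked
```

A target ahead of a recorded gaze direction is treated as perceived; one well off every recorded direction is returned as overlooked and becomes a candidate for visual guidance.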
(51) In Q9, when an overlooked visual perception target is not present (NO in Q9), the visual guidance unit 25 returns the processing to Q1. On the other hand, when an overlooked visual perception target is present (YES in Q9), the processing is proceeded to Q10.
(52) In Q10, the driver state detection unit 3 determines whether or not the driver state is such that driving load is large. When the driving load is large (YES in Q10), the visual guidance unit 25 sets visual guidance to an emphasized mode (Q11).
(53) On the other hand, when the driving load is not large (NO in Q10), the driver state detection unit 3 determines whether or not the driver is absent-minded (Q12). When the driver is absent-minded (YES in Q12), the visual guidance unit 25 sets visual guidance to the emphasized mode (Q11).
(54) When the driver is not absent-minded (NO in Q12), the visual guidance unit 25 sets visual guidance to a non-emphasized mode (Q13).
(55) In Q14, the visual guidance unit 25 determines whether or not a plurality of overlooked visual perception targets are present. When the number of overlooked visual perception targets is one (NO in Q14), the visual guidance unit 25 projects an index (e.g. the index 1 in
(56) On the other hand, when a plurality of overlooked visual perception targets are present in Q14 (YES in Q14), the visual guidance unit 25 ranks the plurality of overlooked visual perception targets for guiding the line-of-sight (Q16). In this case, the visual guidance unit 25 may rank objects in a descending order of a degree of hazard or a degree of caution. Further, the visual guidance unit 25 may rank objects in an ascending order of a distance from the own vehicle. For example, the visual guidance unit 25 may give, to each of the plurality of overlooked visual perception targets, a point whose value increases as the degree of hazard increases, as the degree of caution increases, and as the distance from the own vehicle decreases; and may rank the visual perception targets in descending order of points. The point depending on the degree of hazard and the degree of caution may be set to a value depending on a type of object set in advance. For example, a higher point may be set for a pedestrian crossing a road than for a pedestrian walking on a sidewalk.
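The Q16 point-based ranking can be illustrated as follows. The relative weighting of hazard, caution, and distance, and the dictionary keys, are hypothetical; the patent only states that the point grows with hazard and caution and shrinks with distance.

```python
def rank_overlooked(targets):
    """targets: list of dicts with 'hazard' and 'caution' scores (0-1) and
    'distance' in meters. Higher combined point -> earlier line-of-sight
    guidance. Weights are illustrative."""
    def point(t):
        return (2.0 * t["hazard"]          # hazard weighted most heavily
                + 1.0 * t["caution"]
                + 1.0 / max(t["distance"], 1.0))  # nearer -> larger point
    return sorted(targets, key=point, reverse=True)
```

With these weights, a nearby pedestrian crossing the road outranks a distant pedestrian on a sidewalk, matching the example in the description.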
(57) In Q17, the visual guidance unit 25 displays indexes (e.g. the index 1 in
(58) When a plurality of overlooked visual perception targets are present (YES in Q14), the visual guidance unit 25 may simultaneously display indexes associated with the plurality of overlooked visual perception targets without ranking. When Q15 and Q17 are terminated, the processing returns to Q1.
(59) In the foregoing, an embodiment is described. The present invention, however, is not limited to the embodiment. Modifications are applicable as necessary within the scope of the claims. A vehicle constituted by a four-wheel automobile is exemplified as a moving body to be driven by a driver. This is merely an example. As the moving body, for example, vehicles (e.g. a motorcycle) other than a four-wheel automobile, various construction machines and machinery for construction work, transport machinery such as a forklift, which is frequently used in a factory or a construction site, vessels (particularly, small vessels), and airplanes (particularly, small airplanes) may be used. Further, the moving body may be a moving body (e.g. a drone or a helicopter), which is remotely controlled by an operator. Further, the moving body and the surrounding condition may be virtual. For example, a driving simulator corresponds to a virtual moving body or surrounding environment.
(60) The aforementioned visual perception assistance system is applied to a moving body. The present invention, however, is not limited to the above. The present invention may be directed to a detection system for detecting a visual perception target that an operator should look at in a construction site. Further, the present invention may be directed to a detection system for detecting a visual perception target that an observer should look at in an inspection process in a factory. Further, the present invention may be applied to assessing the interior of a shop, and the like.
(61) Specifically, the present invention may be directed to a detection system for detecting a visual perception target that an observer should look at by using the three visual perception models in a place other than a place where a moving body is used. In this case, in
(62) In such a detection system, visual guidance may be performed by displaying a direction of an overlooked visual perception target on the head-up display 12 (an example of a display unit), in place of displaying an index at a position of the overlooked visual perception target. Further, visual guidance may be performed by audio guidance in which the direction of an overlooked visual perception target is announced by the speaker 13. In such a detection system, an observer in a stationary state corresponds to the driver of the visual perception assistance system.
(63) Each step or a group of steps illustrated in
SUMMARY OF EMBODIMENT
(64) The following is a summary of technical features of the embodiment.
(65) A visual perception assistance system according to an aspect of the present invention includes:
(66) a surrounding condition acquisition unit for acquiring a surrounding condition of a moving body to be driven by a driver;
(67) a visual perception target determination unit for determining a visual perception target being an object that the driver should look at in a surrounding condition acquired by the surrounding condition acquisition unit;
(68) a line-of-sight direction detection unit for detecting a line-of-sight direction of the driver; and
(69) a visual guidance unit for determining whether or not an overlooked visual perception target to which a line-of-sight direction detected by the line-of-sight direction detection unit is not directed is present among the visual perception targets determined by the visual perception target determination unit, and guiding a line-of-sight of the driver toward the overlooked visual perception target, when the overlooked visual perception target is present, wherein
(70) the visual perception target determination unit determines the visual perception target, based on three visual perception models prepared in advance, and
(71) the three visual perception models include a saliency model for determining that an object to be visually perceived at a glance is a visual perception target, a surprise model for determining that an object behaving abnormally is a visual perception target, and a normative model for determining that an object to be visually perceived by a viewing action of a driver serving as a norm is a visual perception target.
(72) According to the aforementioned configuration, it is possible to reliably determine the visual perception target that the driver should visually perceive by using the three visual perception models. Further, in the aforementioned configuration, since the line-of-sight of the driver is guided toward the overlooked visual perception target, it is possible to prevent overlooking of the visual perception target.
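The three-model determination described above can be sketched as follows. The score fields, the threshold values, and the OR-combination of the three models are illustrative assumptions; the patent does not specify how the models' outputs are combined.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """An object detected in the acquired surrounding condition."""
    object_id: str
    saliency: float   # bottom-up conspicuousness score, 0..1 (assumed)
    surprise: float   # abnormal-behaviour score, 0..1 (assumed)
    normative: float  # score from a norm driver's gaze model, 0..1 (assumed)

# Illustrative thresholds, not values from the patent.
SALIENCY_TH, SURPRISE_TH, NORMATIVE_TH = 0.6, 0.5, 0.5

def determine_visual_perception_targets(objects):
    """Flag an object as a visual perception target if any of the
    saliency, surprise, or normative models exceeds its threshold."""
    return [obj for obj in objects
            if obj.saliency >= SALIENCY_TH
            or obj.surprise >= SURPRISE_TH
            or obj.normative >= NORMATIVE_TH]
```

In practice each score would come from a separate model evaluated on the camera image; here they are taken as given.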
(73) In the aforementioned configuration, the moving body may be constituted by a vehicle, and
(74) the surrounding condition acquisition unit may be constituted by a camera for photographing an area ahead of the vehicle.
(75) In this case, it is possible to prevent overlooking of the visual perception target when the driver drives the vehicle, and to implement safe driving of the vehicle.
(76) In the aforementioned configuration, the visual guidance unit may perform visual guidance by displaying an index at a position of the overlooked visual perception target.
(77) In this case, since the index is displayed at the position of the overlooked visual perception target, it is possible to reliably perform visual guidance.
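The overlooked-target check behind this guidance can be sketched as an angular comparison between recent gaze samples and a target's direction. Representing both as horizontal angles in degrees, and the 5-degree tolerance, are assumptions not stated in the patent.

```python
def is_overlooked(target_direction_deg, gaze_history_deg, tolerance_deg=5.0):
    """Return True if no recent gaze sample fell within tolerance_deg
    of the target's direction (angles wrap around at 360 degrees)."""
    def angular_diff(a, b):
        # Smallest absolute difference between two angles, in degrees.
        return abs((a - b + 180.0) % 360.0 - 180.0)
    return all(angular_diff(g, target_direction_deg) > tolerance_deg
               for g in gaze_history_deg)
```

When this check returns True for a determined visual perception target, the system would display an index at the target's position (or indicate its direction) to guide the driver's line-of-sight.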
(78) In the aforementioned configuration, the visual perception assistance system may further include a driver state detection unit for detecting a driver state, wherein
(79) the visual guidance unit may change a degree of intensity of visual guidance depending on a driver state to be detected by the driver state detection unit.
(80) In this case, it is possible to more accurately perform visual guidance by changing the degree of intensity of visual guidance depending on the driver state.
(81) In the aforementioned configuration, the visual guidance unit may change the degree of intensity of visual guidance by changing a degree of conspicuousness of an index to be displayed at a position of the overlooked visual perception target.
(82) In this case, since the index is displayed at the position of the overlooked visual perception target, it is possible to reliably perform visual guidance. Further, it is possible to more accurately perform visual guidance by changing the degree of intensity of visual guidance depending on the driver state.
(83) In the aforementioned configuration, the driver state detection unit may detect, as the driver state, at least a state in which the driving load is large, and
(84) when the driver state detection unit detects that the driving load is large, the visual guidance unit may emphasize visual guidance, as compared with a case where the driver state detection unit detects that the driving load is small.
(85) In this case, it is possible to reliably guide the line-of-sight of the driver toward the overlooked visual perception target when the driving load is large, i.e., when overlooking of the visual perception target is likely to occur.
(86) In the aforementioned configuration, the driver state detection unit may detect, as the driver state, at least a state in which the driver is absent-minded, and
(87) when the driver state detection unit detects that the driver is absent-minded, the visual guidance unit may emphasize visual guidance, as compared with a case where the driver state detection unit detects that the driver is not absent-minded.
(88) In this case, it is possible to reliably guide the line-of-sight of the driver toward the overlooked visual perception target when the driver is absent-minded, i.e., when overlooking of the visual perception target is likely to occur.
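The state-dependent intensity change described in paragraphs (78) to (88) can be sketched as a mapping from the two detected driver states (high driving load, absent-mindedness) to a conspicuousness level for the displayed index. The three-level scale is an illustrative assumption; the patent only requires that guidance be emphasized when either state is detected.

```python
def guidance_intensity(driving_load_high, absent_minded):
    """Map the detected driver state to an index conspicuousness level.

    Level 1 is ordinary guidance; each risk state detected by the
    driver state detection unit raises the level by one (assumed scale).
    """
    level = 1
    if driving_load_high:
        level += 1  # emphasize guidance under high driving load
    if absent_minded:
        level += 1  # emphasize guidance when the driver is absent-minded
    return level
```

A higher level would translate into a more conspicuous index, e.g. larger, brighter, or blinking, on the head-up display.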
(89) In the aforementioned configuration, when a plurality of the overlooked visual perception targets are present, the visual guidance unit may rank the plurality of overlooked visual perception targets in a viewing order, and may perform visual guidance of the plurality of overlooked visual perception targets by the ranking.
(90) In this case, when a plurality of overlooked visual perception targets are present, since visual guidance of the overlooked visual perception targets is performed by the ranking, it is possible to perform visual guidance in a descending order of importance with respect to the visual perception targets, for example.
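The ranking of multiple overlooked targets can be sketched as a sort in descending order of importance. Using a surprise score ahead of a saliency score as the importance key is an illustrative assumption, since the patent leaves the ranking criterion open.

```python
def rank_overlooked_targets(targets):
    """Order overlooked targets for sequential visual guidance.

    `targets` is a list of dicts with hypothetical "surprise" and
    "saliency" score keys; higher-scoring targets are guided to first.
    """
    return sorted(targets,
                  key=lambda t: (t["surprise"], t["saliency"]),
                  reverse=True)
```

The visual guidance unit would then display indexes for the targets one by one in the returned order.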
(91) A visual perception target detection system according to another aspect of the present invention may include:
(92) a surrounding condition acquisition unit for acquiring a surrounding condition of an observer; and
(93) a visual perception target determination unit for determining a visual perception target being an object that the observer should look at in a surrounding condition acquired by the surrounding condition acquisition unit, wherein
(94) the visual perception target determination unit may determine the visual perception target, based on three visual perception models prepared in advance, and
(95) the three visual perception models may include a saliency model for determining that an object to be visually perceived at a glance is a visual perception target, a surprise model for determining that an object behaving abnormally is a visual perception target, and a normative model for determining that an object to be visually perceived by a viewing action of an observer serving as a norm is a visual perception target.
(96) According to the aforementioned configuration, it is possible to reliably detect the visual perception target that the observer should look at.
(97) In the aforementioned configuration, the visual perception target detection system may further include a display unit for displaying a visual perception target determined by the visual perception target determination unit.
(98) In this case, the observer is able to easily and clearly grasp an object which may be overlooked but the observer should pay attention to.
INDUSTRIAL APPLICABILITY
(99) The present invention is advantageous in the field of automobiles, or in the field of monitoring a construction site and a factory, since it is possible to prevent overlooking of a visual perception target.