Display control apparatus
11750768 · 2023-09-05
Assignee
Inventors
- Yu Maeda (Nisshin, JP)
- Taketo Harada (Nisshin, JP)
- Mitsuyasu Matsuura (Nisshin, JP)
- Hirohiko Yanagawa (Kariya, JP)
- Muneaki Matsumoto (Kariya, JP)
CPC classification
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
G06V20/58
PHYSICS
B60R2300/307
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/607
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/304
PERFORMING OPERATIONS; TRANSPORTING
H04N5/2628
ELECTRICITY
H04N7/18
ELECTRICITY
B60R2300/305
PERFORMING OPERATIONS; TRANSPORTING
H04N5/44504
ELECTRICITY
International classification
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
G06V20/58
PHYSICS
H04N5/262
ELECTRICITY
H04N5/272
ELECTRICITY
Abstract
An image processing unit identifies the shape of an obstacle detected in an area that appears in a peripheral image based on an image captured by a camera. The shape of the obstacle includes at least a tilt, in a road-surface direction, of a section of the obstacle that faces a vehicle. The image processing unit generates a superimposed image in which a mark image, generated as a pattern indicating the identified obstacle, is superimposed onto a position that corresponds to the obstacle in the peripheral image. At this time, the image processing unit variably changes properties of the mark image based on the tilt of the obstacle identified by an obstacle identifying unit. The image processing unit then displays the generated superimposed image on a display apparatus.
Claims
1. A display control apparatus that displays an image on a display apparatus that is provided inside a vehicle, the image being generated based on an image capturing a predetermined area in a periphery of the vehicle by a camera that is mounted in the vehicle, the display control apparatus comprising processing circuitry configured to: acquire a peripheral image that is the image based on the image captured by the camera; identify at least one of a shape, a tilt, a position, and a color of an obstacle that is identified from an area appearing in the peripheral image acquired by the processing circuitry; generate a superimposed image in which a mark image that is generated as a pattern indicating the obstacle identified by the processing circuitry is superimposed onto a position corresponding to the obstacle in the peripheral image, and display the generated superimposed image on the display apparatus; acquire, as the peripheral image, a bird's-eye-view image in which the image captured by the camera is changed to an image that is expressed by a bird's-eye-view; display the mark image so as to be superimposed onto the bird's-eye-view image; and variably change properties of the mark image based on a condition of at least one of the shape, the tilt, the position, or the color of the obstacle identified by the processing circuitry, wherein: the processing circuitry is configured to draw the mark image in a mode in which a lower end side of the obstacle is given more emphasis than an upper end side, by drawing the mark image so as to change the mark image continuously or in steps from the upper end side to the lower end side of the obstacle as depicted in the bird's-eye-view image acquired as the peripheral image.
2. A display control apparatus that displays an image on a display apparatus that is provided inside a vehicle, the image being generated based on an image capturing a predetermined area in a periphery of the vehicle by a camera that is mounted in the vehicle, the display control apparatus comprising processing circuitry configured to: acquire a peripheral image that is the image based on the image captured by the camera; identify at least one of a shape, a tilt, a position, and a color of an obstacle that is identified from an area appearing in the peripheral image acquired by the processing circuitry; generate a superimposed image in which a mark image that is generated as a pattern indicating the obstacle identified by the processing circuitry is superimposed onto a position corresponding to the obstacle in the peripheral image, and display the generated superimposed image on the display apparatus; acquire, as the peripheral image, a bird's-eye-view image in which the image captured by the camera is changed to an image that is expressed by a bird's-eye-view; display the mark image so as to be superimposed onto the bird's-eye-view image; and variably change properties of the mark image based on a condition of at least one of the shape, the tilt, the position, or the color of the obstacle identified by the processing circuitry, wherein: the processing circuitry is configured to draw the mark image in a mode in which a lower end side of the obstacle is given more emphasis than an upper end side, by drawing the mark image so as to change at least one of a color, a concentration, and a transmittance of a pattern composing the mark image continuously or in steps from the upper end side to the lower end side of the obstacle as depicted in the bird's-eye-view image acquired as the peripheral image.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) The above-described object, other objects, characteristics, and advantages of the present disclosure will be further clarified through the detailed description hereafter, with reference to the accompanying drawings.
DESCRIPTION OF EMBODIMENTS
(42) An embodiment of the present disclosure will hereinafter be described with reference to the drawings. The present disclosure is not limited to the embodiment described below and may be carried out according to various modes.
(43) [Description of an Onboard Display System Configuration]
(44) A configuration of an onboard display system 10 according to the embodiment will be described with reference to
(45) The camera 11 is an imaging apparatus that is set so as to face the periphery, such as ahead, to the side, or to the rear, of the vehicle 1. The camera 11 is configured to capture an image of a peripheral area of the vehicle 1, and output data of an image (also referred to, hereafter, as a captured image) that expresses the image that has been captured to the image processing unit 14.
(46) The distance measuring unit 12 is a sensor that is configured to acquire information by scanning the area imaged by the camera 11. The information indicates the distance between an obstacle (such as another vehicle, a pedestrian, or a wall or a column of a building) that is present in the scanned area and the vehicle 1, and the direction of the obstacle when viewed from the vehicle 1. For example, the distance measuring unit 12 is realized by an ultrasonic sonar, a millimeter-wave radar, a laser radar, a stereo camera, a monocular camera, a periphery monitoring camera, or the like. The position, the shape of a border, the tilt of a face, and an approximate width of the obstacle can be recognized from the measurement results obtained by the distance measuring unit 12.
(47) The display unit 13 is a display that is configured to display the image information provided by the image processing unit 14. For example, the display unit 13 is provided in a location that is easily visible to a driver of the vehicle 1, such as in an instrument panel of the vehicle 1.
(48) The image processing unit 14 is an information processing apparatus that is mainly configured by a central processing unit (CPU), a random access memory (RAM), a read-only memory (ROM), a semiconductor memory such as a flash memory, an input/output interface, and the like (not shown). For example, the image processing unit 14 is realized by a microcontroller in which functions of a computer system are consolidated. The functions of the image processing unit 14 are actualized by the CPU running a program that is stored in a non-transitory tangible storage medium such as the ROM or the semiconductor memory. The image processing unit 14 may be configured by a single or a plurality of microcontrollers. The method for actualizing the functions of the image processing unit 14 is not limited to software. Some or all of the functions may be actualized through use of hardware combining logic circuits, analog circuits, and the like.
(49) The image processing unit 14 performs a distance measurement process and an obstacle display process based on the above-described program. A detailed description of these processes will be given hereafter.
(50) [Description of the Distance Measurement Process]
(51) The steps in the distance measurement process performed by the image processing unit 14 will be described with reference to a flowchart in
(52) At step S100, the image processing unit 14 measures the distance to an obstacle that is present in the periphery of the vehicle 1 using the distance measuring unit 12 and acquires positional information related to the obstacle. Specifically, the image processing unit 14 continuously scans the periphery of the vehicle 1 using detection waves of a radar, a sonar, or the like that configures the distance measuring unit 12, and receives reflected waves from the obstacle. The image processing unit 14 thereby acquires the positional information that indicates a distribution of the distance to an obstacle present in the scanned area. Alternatively, the positional information that indicates a distribution of the distance to an obstacle may be acquired through use of a known image recognition technology in which the distance to an object is recognized based on an image that is captured by a stereo camera, a monocular camera, a periphery monitoring camera, or the like.
(53) At step S102, the image processing unit 14 stores the positional information acquired at step S100, that is, the information that indicates a distribution of the distance between the vehicle 1 and an obstacle in the memory within the image processing unit 14. After step S102, the image processing unit 14 returns the process to step S100.
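The positional information acquired at steps S100 and S102 amounts to a set of distance/direction pairs per echo. A minimal Python sketch of how such a polar scan could be converted into obstacle positions in the vehicle frame is shown below; the function name and the frame convention (x forward, y left) are illustrative assumptions, not part of the embodiment.

```python
import math

def scan_to_positions(ranges_m, bearings_deg):
    """Convert (distance, bearing) echo pairs from a ranging sensor into
    x/y positions in the vehicle frame (x: forward, y: left).
    Illustrative sketch; frame convention is an assumption."""
    positions = []
    for r, b in zip(ranges_m, bearings_deg):
        rad = math.radians(b)
        positions.append((r * math.cos(rad), r * math.sin(rad)))
    return positions

# Echoes straight ahead and 90 degrees to the left, both at 2 m.
points = scan_to_positions([2.0, 2.0], [0.0, 90.0])
```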
(54) [Description of the Obstacle Display Process]
(55) The steps in the obstacle display process performed by the image processing unit 14 will be described with reference to a flowchart in
(56) At step S200, the image processing unit 14 acquires the latest captured image amounting to a single frame from the camera 11. At step S202, the image processing unit 14 performs a coordinate transformation on the coordinates of the pixels that configure the captured image acquired at step S200 using a known technique for bird's-eye-view conversion, and thereby converts the captured image of the camera 11 to a bird's-eye-view image that simulates a state of overlooking from a viewpoint set above the vehicle 1.
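The bird's-eye-view conversion at step S202 is typically realized as a planar homography between the camera image and the road plane. The following sketch applies an inverse homography per output pixel with nearest-neighbour sampling; the matrix `H` would come from camera calibration, which is assumed here and not described in the embodiment.

```python
import numpy as np

def warp_to_birds_eye(image, H, out_h, out_w):
    """Inverse-map each output (bird's-eye) pixel through the homography H
    back into the source camera image (nearest-neighbour sampling).
    Sketch only; H is assumed to be known from calibration."""
    out = np.zeros((out_h, out_w) + image.shape[2:], dtype=image.dtype)
    Hinv = np.linalg.inv(H)
    for v in range(out_h):
        for u in range(out_w):
            x, y, w = Hinv @ np.array([u, v, 1.0])
            sx, sy = int(round(x / w)), int(round(y / w))
            if 0 <= sy < image.shape[0] and 0 <= sx < image.shape[1]:
                out[v, u] = image[sy, sx]
    return out
```

In practice a library routine such as OpenCV's perspective warp would replace the per-pixel loop.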
(57) At step S204, the image processing unit 14 reads the latest positional information acquired through the above-described distance measurement process (see
(58) The mark image is a pattern used to indicate the obstacle that is present in the bird's-eye-view image. Specific properties of the mark image generated at this time will be described hereafter. At step S208, the image processing unit 14 generates a superimposed image in which the mark image generated at step S206 is superimposed onto a position that corresponds to the obstacle that appears in the bird's-eye-view image generated at step S202. The image processing unit 14 then displays the generated superimposed image in the display unit 13.
(59) Here, the image processing unit 14 is configured to generate the superimposed image by changing the properties of the mark image to be superimposed onto the image of the obstacle, based on the state, such as the shape, tilt, position, and color, of the obstacle that appears in the image captured by the camera 11 and is identified by the distance measuring unit 12. For example, the properties of the mark image include the shape, size, tilt, flashing, color, concentration, and transparency of the pattern. Hereafter, specific application examples of the mark image to be superimposed onto the image of the obstacle will be described with reference to
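Superimposing a semi-transparent mark onto the bird's-eye-view image, as at step S208, can be sketched as per-pixel alpha blending. The function below is an illustrative assumption of one way to do it, with the mark supplied as an RGBA patch whose alpha channel carries the transparency property.

```python
import numpy as np

def superimpose_mark(bird_eye, mark_rgba, top_left):
    """Alpha-blend an RGBA mark patch onto the bird's-eye-view image at
    the position corresponding to the detected obstacle.
    Illustrative sketch; names and layout are assumptions."""
    y0, x0 = top_left
    h, w = mark_rgba.shape[:2]
    region = bird_eye[y0:y0 + h, x0:x0 + w].astype(float)
    rgb = mark_rgba[..., :3].astype(float)
    alpha = mark_rgba[..., 3:4].astype(float) / 255.0
    blended = alpha * rgb + (1.0 - alpha) * region
    bird_eye[y0:y0 + h, x0:x0 + w] = blended.astype(bird_eye.dtype)
    return bird_eye
```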
FIG. 4: Application Example 1
(60)
(61) In a case in
(62) In a case in
(63) In a case in
(64) In the cases in
FIG. 5: Application Example 2
(65) The image processing unit 14 may be configured to periodically flash the mark image that is displayed so as to overlap the obstacle image. In addition, as shown in examples in
(66) A case in
(67) A case in
(68) A case in
FIG. 6: Application Example 3
(69) As shown in examples in
(70) A case in
(71) A case in
FIG. 7: Application Example 4
(72) The image processing unit 14 may be configured to arrange a mark image for an obstacle that corresponds to a course on which the vehicle 1 is predicted to advance, or to an area obtained by extending the vehicle width frontward and rearward along the vehicle length. In this case, the mark image may not be displayed in other areas even when an obstacle is detected. Specifically, the image processing unit 14 predicts the course of the vehicle 1 by acquiring vehicle information that indicates a steering state of the vehicle 1 and the like. The image processing unit 14 then identifies the area of the predicted course or the area in the frontward and rearward directions of the vehicle length in the bird's-eye-view image based on information, such as the vehicle width and the vehicle length of the vehicle 1, registered in advance.
(73) A case in
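For the simplest case of a straight predicted course, the membership test of application example 4 reduces to checking whether an obstacle lies inside the band obtained by extending the vehicle width ahead of and behind the vehicle. The sketch below assumes a straight-line course and a vehicle-frame coordinate system; a curved predicted course would need a path-following band instead.

```python
def in_extended_corridor(obstacle_xy, vehicle_width, reach_ahead, reach_behind):
    """True if the obstacle lies inside the band obtained by extending the
    vehicle width straight ahead and behind the vehicle.
    Straight-line course assumed; x is forward, y is lateral."""
    x, y = obstacle_xy
    return abs(y) <= vehicle_width / 2.0 and -reach_behind <= x <= reach_ahead
```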
FIG. 8: Application Example 5
(74) The image processing unit 14 may be configured to arrange a mark image for an obstacle that corresponds to an area with reference to the vehicle width of the vehicle 1, taking into consideration the likelihood of contact between the vehicle 1 and the obstacle. In this case, as shown in an example in
(75) The case in
FIG. 9: Application Example 6
(76) The image processing unit 14 may be configured to arrange a mark image related to the obstacle in an area that is wider than a width of a borderline (also referred to, hereafter, as a detection line) that indicates a shape of an obstacle that is detected by the radar or the sonar of the distance measuring unit 12. Specifically, the image processing unit 14 identifies the width of the detection line of the obstacle that is indicated by the positional information acquired by the distance measuring unit 12. The image processing unit 14 then identifies the area over which the mark image is arranged with reference to the width of the detection line.
(77) A case in
FIG. 10: Application Example 7
(78) The image processing unit 14 may be configured to recognize a border of an obstacle, which has been detected by the radar or the sonar of the distance measuring unit 12, from the bird's-eye-view image using image recognition. The image processing unit 14 may then arrange a mark image related to the obstacle along the recognized border.
(79) Specifically, as shown in an example in
FIG. 11: Application Example 8
(80) The image processing unit 14 may be configured to change the properties (such as the shape, size, color, and transmittance) of the mark image to be superimposed onto an image of an obstacle based on the farness/nearness of the distance between the vehicle 1 and the obstacle. As a result, the driver can accurately ascertain the distance to the obstacle.
(81) A case in
(82) A case in
(83) A case in
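The distance-dependent property changes of application example 8 amount to mapping distance bands to mark attributes. The thresholds and attribute values in the following sketch are illustrative assumptions, not values taken from the embodiment.

```python
def mark_properties(distance_m):
    """Pick mark colour, size scale, and flashing from distance bands.
    Band thresholds and values are illustrative, not from the embodiment."""
    if distance_m < 1.0:
        return {"color": "red", "scale": 1.5, "flash": True}
    if distance_m < 3.0:
        return {"color": "yellow", "scale": 1.2, "flash": False}
    return {"color": "green", "scale": 1.0, "flash": False}
```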
FIG. 12: Application Example 9
(84) The image processing unit 14 may be configured to draw the mark image to be superimposed onto an image of an obstacle so as to be extended to an outer edge of a display area of the superimposed image in a direction corresponding to an upper side of the obstacle. Specifically, as shown in an example in
FIG. 13: Application Example 10
(85) The image processing unit 14 may be configured to extend the shape of the mark image to be superimposed onto an image of an obstacle in the bird's-eye-view image in a radiating manner, taking into consideration distortion (such as the image being extended in a radiating manner as the image becomes farther from the center) in the image that occurs when the captured image captured by the camera 11 is converted to the bird's-eye-view image. Specifically, as shown in an example in
FIG. 14: Application Example 11
(86) The image processing unit 14 may be configured to draw the mark image to be superimposed onto an image of an obstacle in a mode in which a lower end side of the obstacle is emphasized. Specifically, as shown in examples in
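The lower-end emphasis of application example 11 (changing the transmittance of the pattern continuously from the upper end side to the lower end side, as recited in the claims) can be sketched as a vertical alpha ramp. The function below is one assumed realization, building an RGBA mark whose opacity grows towards the bottom row.

```python
import numpy as np

def gradient_mark(height, width, base_rgb):
    """RGBA mark whose opacity ramps from faint at the top (upper end of
    the obstacle) to opaque at the bottom (lower end, nearest the road).
    Illustrative sketch; linear ramp is an assumption."""
    mark = np.zeros((height, width, 4), dtype=np.uint8)
    mark[..., :3] = base_rgb
    for row in range(height):
        mark[row, :, 3] = int(255 * (row + 1) / height)
    return mark
```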
FIG. 15: Application Example 12
(87) The image processing unit 14 may be configured to recognize the color of the obstacle from the captured image and draw the mark image using a color that corresponds to a complementary color of the recognized color of the obstacle. Specifically, as shown in examples in
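For application example 12, one simple notion of a complementary color for an 8-bit RGB value is its channel-wise complement, sketched below; the embodiment does not prescribe a particular color model, so this is an assumption.

```python
def complementary_rgb(rgb):
    """Channel-wise complement of an 8-bit RGB colour, so the mark stands
    out against the obstacle colour recognised from the captured image."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)
```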
FIG. 16: Application Example 13
(88) When the obstacle detected by the distance measuring unit 12 is presumed to be a sloped surface, the image processing unit 14 may be configured to also arrange the mark image in an area further towards the vehicle 1 than the detection line that indicates the border of the detected obstacle.
(89) For example, when the radar or the sonar of the distance measuring unit 12 detects a sloped surface, such as an upward slope, an undetected portion of the sloped surface likely continues towards the vehicle 1 in an area below the lower vertical limit of the detection area of the radar or the sonar. Therefore, as shown in an example in
FIG. 17: Application Example 14
(90) The image processing unit 14 may be configured to draw the mark image to be superimposed onto an image of an obstacle in a mode in which an area that is actually detected by the distance measuring unit 12 is given more emphasis than other areas. Specifically, as shown in an example in
FIG. 18: Application Example 15
(91) The image processing unit 14 may be configured to display lines (referred to, hereafter, as grid lines) in the form of squares that serve as an indicator of the distance between the vehicle 1 and the obstacle in the superimposed image, based on the farness/nearness of the distance between the vehicle 1 and the obstacle. In addition, the size of the squares formed by the grid lines may be variable, based on the farness/nearness of the distance between the vehicle 1 and the obstacle. As a result, the driver can accurately ascertain the distance to the obstacle.
(92) A case in
(93) A case in
(94) A case in
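The grid lines of application example 15 can be sketched by painting every n-th row and column of the bird's-eye-view image; a caller would shrink the square size as the nearest obstacle approaches. The function and parameter names are illustrative assumptions.

```python
import numpy as np

def draw_grid(image, square_px, color=(255, 255, 255)):
    """Overlay square grid lines as a distance indicator; a caller would
    reduce `square_px` as the nearest obstacle gets closer so the grid
    reads as a finer ruler. Illustrative sketch."""
    out = image.copy()
    out[::square_px, :] = color   # horizontal grid lines
    out[:, ::square_px] = color   # vertical grid lines
    return out
```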
FIG. 19: Application Example 16
(95) When the obstacle detected by the distance measuring unit 12 is presumed to be a vehicle, the image processing unit 14 may be configured to display, in a superimposing manner, a mark image that is composed of an icon that represents a vehicle so as to match the orientation and size of the vehicle detected as the obstacle.
(96) A case in
(97) As shown in an example in
(98) Alternatively, as shown in an example in
FIG. 20: Application Example 17
(99) When drawing a mark image that is composed of an icon that represents a vehicle, the image processing unit 14 may be configured to use a single representative color that is acquired from an original image of another vehicle onto which the mark image is to be superimposed. Specifically, as shown in an example in
Effects
(100) The following effects are achieved by the onboard display system according to the embodiment.
(101) Based on the shape, such as the tilt and size, of an obstacle that is detected in the periphery of the vehicle 1, the properties, such as the orientation and shape, of the mark image that is the pattern indicating the obstacle can be freely changed. In addition, a display mode, such as the size, color, and flashing, of the mark image can be freely changed based on the distance to the obstacle. As a result of the driver of the vehicle viewing the superimposed image in which the mark image is superimposed onto the image of the obstacle in this way, the driver can easily ascertain the state of the obstacle.
Correspondence to the Configuration According to the Embodiment
(102) The image processing unit 14 corresponds to an example of a display control apparatus. The processes at steps S200 and S202 performed by the image processing unit 14 correspond to an example of a process as an image acquiring unit. The process at step S206 performed by the image processing unit 14 corresponds to an example of a process as an obstacle identifying unit and a control unit.
Variation Example
(103) A function provided by a single constituent element according to the above-described embodiments may be divided among a plurality of constituent elements. Functions provided by a plurality of constituent elements may be provided by a single constituent element. In addition, a part of a configuration according to the above-described embodiments may be omitted. Furthermore, at least a part of a configuration according to an above-described embodiment may be added to or replace a configuration according to another of the above-described embodiments. Any mode included in the technical concept specified by the wordings of the claims is an embodiment of the present disclosure.
(104) For example, in the application example 1 (see
(105) In addition, according to the above-described embodiment, a case is described in which, upon conversion of a captured image captured by the camera 11 into a bird's-eye-view image, the mark image is superimposed onto the image of an obstacle in the converted bird's-eye-view image. Instead of the bird's-eye-view image, the mark image may be superimposed onto the captured image captured by the camera 11 itself. Alternatively, the mark image may be superimposed onto an image obtained by conversion to an image of a perspective other than the bird's-eye-view image.
(106) The present disclosure can also be actualized in various modes, such as a program for enabling a computer to function as the above-described image processing unit 14, and a non-transitory tangible recording medium such as a semiconductor memory, in which the program is recorded.