OBJECT SENSING DEVICE AND OBJECT SENSING METHOD
20230221410 · 2023-07-13
CPC classification
G06V20/58
PHYSICS
G06T7/521
PHYSICS
G06V20/588
PHYSICS
G01S7/4802
PHYSICS
International classification
G01S17/86
PHYSICS
G06V20/58
PHYSICS
G06V20/56
PHYSICS
Abstract
An object of the present invention is to provide an object sensing device that, when the road surface around an own vehicle is wet, classifies the observation point group output by a LiDAR into a real image and a mirror image, and can use the mirror image for detecting the real image. An object sensing device that detects an object around a vehicle based on point cloud data of observation points observed by a LiDAR mounted on the vehicle includes: a road surface shape estimation unit that estimates a shape of the road surface; a road surface condition estimation unit that estimates a dry/wet situation of the road surface; and an observation point determination unit that, when the road surface is estimated to be wet, determines a low observation point observed at a position lower than the estimated road surface by a predetermined amount or more. The object is detected by using the point cloud data of the observation points other than the low observation point together with the point cloud data of an inverted observation point obtained by inverting the low observation point with reference to the height of the road surface.
Claims
1. An object sensing device that detects an object around a vehicle based on point cloud data of an observation point observed by a LiDAR mounted on the vehicle, the object sensing device comprising: a road surface shape estimation unit that estimates a shape of a road surface; a road surface condition estimation unit that estimates a dry/wet situation of the road surface; and an observation point determination unit that determines a low observation point observed at a position lower than the estimated road surface by a predetermined amount or more when the road surface is estimated to be in a wet situation, wherein the object is detected by using point cloud data of the observation points other than the low observation point and point cloud data of an inverted observation point obtained by inverting the low observation point with reference to a height of the road surface.
2. The object sensing device according to claim 1, wherein the observation point determination unit determines the low observation point as a mirror image observation point in a case where another observation point exists around the inverted observation point.
3. The object sensing device according to claim 1, further comprising: a grouping unit that generates a group including both the inverted observation point and the observation point according to a correlation of the point cloud data.
4. The object sensing device according to claim 2, further comprising: a grouping unit that creates a mirror image group of only the mirror image observation points and a real image group of only the observation points other than the mirror image observation points according to a correlation of the point cloud data.
5. The object sensing device according to claim 4, wherein the grouping unit inverts a position and a size of the mirror image group so as to correspond to the real image group, and integrates and holds information of positions and sizes of both groups.
6. The object sensing device according to claim 1, wherein a captured image of a camera sensor mounted on a vehicle is further input, and in object information of the captured image, object information corresponding to the low observation point is determined as object information corresponding to a mirror image.
7. An object sensing method for detecting an object around a vehicle based on point cloud data of an observation point observed by a LiDAR mounted on the vehicle, the object sensing method comprising: estimating a shape of a road surface; estimating a dry/wet situation of the road surface; determining a low observation point observed at a position lower than the estimated road surface by a predetermined amount or more when the road surface is estimated to be in a wet situation; and detecting the object by using point cloud data of the observation points other than the low observation point and point cloud data of an inverted observation point obtained by inverting the low observation point with reference to a height of the road surface.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0025] Hereinafter, embodiments of the present invention will be described with reference to the drawings.
First Embodiment
[0026] First, a first embodiment of the present invention will be described with reference to the drawings.
<LiDAR 1 Behavior in Good Weather>
[0029] The LiDAR 1 can observe a plurality of observation points P in the vertical direction by radiating a plurality of laser beams radially and discretely within its recognizable angular range in the vertical direction. Similarly, a plurality of observation points P in the horizontal direction can be observed by emitting a plurality of laser beams radially and discretely within the recognizable angular range in the horizontal direction.
[0030] A set of the large number of observation points P observed by the LiDAR 1 in this manner is input to the object sensing device 2 as point cloud data D having three-dimensional coordinate information.
<Behavior of LiDAR 1 in Rainy Weather>
[0031] On the other hand,
[0032] The LiDAR 1 calculates the three-dimensional coordinate values (x, y, z) of an observation point P using trigonometric functions or the like from the irradiation angle of the laser beam and the distance information to the observed object. Therefore, the coordinate value of the observation point P observed through the real trajectory L reflected by the wet road surface R.sub.W is calculated at the position of the non-existent mirror image observation point P.sub.M on a straight imaginary trajectory L′.
<Extraction Processing of Mirror Image Observation Point>
[0033] If the mirror image observation point P.sub.M illustrated in
[0034] In Step S1, the LiDAR 1 acquires the point cloud data D. The point cloud data D acquired here is coordinate information of each point of the observation point P as illustrated in
[0035] In Step S2, the road surface shape estimation unit 21 of the object sensing device 2 estimates the shape of the road surface R around the own vehicle V.sub.0 (hereinafter, referred to as “estimated road surface shape F.sub.R”). The estimated road surface shape F.sub.R can be estimated by various methods. For example, any one of the following methods can be used.
[0036] (1) The road surface shape is estimated based on the posture of the own vehicle V.sub.0 calculated from the output of an inertial sensor that three-dimensionally detects the acceleration and the angular velocity.
[0037] (2) The road surface shape is estimated by analyzing the captured image of a camera sensor.
[0038] (3) The road surface shape is estimated by analyzing the point cloud data D acquired by the LiDAR 1.
[0039] (4) The road surface shape registered in the map data is acquired based on the current position of the own vehicle V.sub.0.
[0040] In Step S3, the road surface condition estimation unit 22 of the object sensing device 2 estimates the dry/wet situation of the road surface R around the own vehicle V.sub.0. The dry/wet situation of the road surface R can be estimated by various methods. For example, any one of the following methods can be used.
[0041] (1) The operating signal of the wiper is used as rainfall information. When the wiper is in operation, it is regarded as raining, and when the wiper is not in operation, it is regarded as not raining. If it is regarded as raining, it is determined that the road surface is wet.
[0042] (2) The output of a raindrop sensor that detects the wet state of the own vehicle V.sub.0 is used. When the raindrop sensor detects a raindrop, it is regarded as raining; otherwise, it is regarded as not raining. If it is regarded as raining, it is determined that the road surface is wet.
[0043] (3) Weather condition data is acquired via the Internet or the like. In this case, not only the current weather indicated by the weather condition data but also the weather condition data from the past to the present may be used to determine the wet situation of the current road surface.
[0044] (4) The wet state or the weather of the road surface R is determined by analyzing the captured image of the camera sensor.
[0045] (5) A low observation point P.sub.L to be described later is extracted from all the observation points P included in the point cloud data D. When the ratio of the low observation points P.sub.L to all the observation points P exceeds a predetermined threshold value, it is determined that the road surface is wet.
[0046] It is determined whether the current road surface condition is wet by any one of these methods or a combination thereof. When it is determined that the road surface is wet, it is desirable to hold the determination result for a predetermined time. This is because even when the weather changes from rainy weather to good weather, it takes some time for the wet road surface to dry.
[0047] In Step S4, the observation point determination unit 23 of the object sensing device 2 checks the determination result of the road surface condition estimation unit 22. When the road surface R is in the wet state, the process proceeds to Step S5, and when the road surface R is not in the wet state, the process returns to Step S1. The reason why the process returns to Step S1 when the road surface R is not in the wet state is that if the road surface R is not in the wet state, it is considered that the coordinate value of the mirror image observation point P.sub.M is not included in the point cloud data D acquired by the LiDAR 1 (see
[0048] In Step S5, the observation point determination unit 23 extracts the low observation point P.sub.L at a position sufficiently lower than the estimated road surface shape F.sub.R using the information of the estimated road surface shape F.sub.R estimated in Step S2 and an arbitrary threshold Th.sub.1.
[0050] The following Expression 1 is used to determine whether the observation point P having certain coordinate values (x′, y′, z′) observed by the LiDAR 1 corresponds to the low observation point P.sub.L.
z′<H.sub.R−Th.sub.1 (Expression 1)
[0051] H.sub.R: height of estimated road surface shape F.sub.R at coordinates (x′, y′)
[0052] In a case where (Expression 1) is satisfied, the observation point determination unit 23 determines the observation point P as the low observation point P.sub.L and holds the determination result.
[0053] In the above description, the threshold Th.sub.1 has been described as a constant, but it may be a variable. For example, the threshold Th.sub.1 may be set as a function of the relative distance d from the own vehicle V.sub.0, based on a mathematical model or a data table. When the threshold Th.sub.1 is proportional to the distance from the own vehicle V.sub.0, a distant observation point P is less likely to be determined as a low observation point P.sub.L than a nearby one. This is a countermeasure against the degradation of the accuracy of the estimated road surface shape F.sub.R, and hence of the extraction accuracy of the low observation point P.sub.L, as the distance from the own vehicle V.sub.0 increases.
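The test of (Expression 1) combined with a distance-dependent threshold can be sketched as follows. This is an illustrative sketch, not the claimed implementation; the function names, the linear threshold model Th.sub.1(d), and the numeric constants are assumptions.

```python
import math

def is_low_observation_point(x, y, z, road_height_at,
                             th1_base=0.3, th1_per_meter=0.02):
    """Test (Expression 1): z' < H_R - Th1, here with Th1 growing
    linearly with the relative distance d from the own vehicle
    (assumed model; a constant Th1 is the simplest case)."""
    d = math.hypot(x, y)                 # relative distance d from own vehicle
    th1 = th1_base + th1_per_meter * d   # variable threshold Th1(d)
    h_r = road_height_at(x, y)           # height H_R of estimated road surface F_R
    return z < h_r - th1
```

For a flat road surface at height 0, a point 0.5 m below the road at a 5 m range satisfies the condition, while a point only 0.2 m below it does not.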
[0054] In Step S6, as preprocessing for determining whether the low observation point P.sub.L is the mirror image observation point P.sub.M, the observation point determination unit 23 inverts the low observation point P.sub.L with reference to the estimated road surface shape F.sub.R and generates an inverted observation point P.sub.R.
[0056] In Step S7, the observation point determination unit 23 checks whether another observation point P exists in the vicinity of the inverted observation point P.sub.R. Then, if there is another observation point P, the process proceeds to Step S8, and if there is no other observation point P, the process returns to Step S1. The reason why the process returns to Step S1 in a case where there is no other observation point P in the vicinity of the inverted observation point P.sub.R is that, under the environment where the mirror image observation point P.sub.M is generated, another observation point P should exist in the vicinity of the inverted observation point P.sub.R (see
[0058] Finally, in Step S8, the observation point determination unit 23 holds, as the mirror image observation point P.sub.M, the low observation point P.sub.L determined in Step S7.
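Steps S6 to S8 can be sketched as follows: a minimal illustration assuming a list of 3-D points, one boolean low-point flag per point, and a simple radius test for "in the vicinity" (the names and the radius value are assumptions, not part of the disclosure).

```python
def classify_mirror_points(points, low_flags, road_height_at, radius=0.5):
    """For each low observation point P_L, invert its height about the
    estimated road surface (S6), check whether another observation point
    exists near the inverted point P_R (S7), and if so hold it as a
    mirror image observation point P_M (S8)."""
    others = [p for p, low in zip(points, low_flags) if not low]
    mirror_flags = []
    for (x, y, z), low in zip(points, low_flags):
        if not low:
            mirror_flags.append(False)
            continue
        z_inv = 2.0 * road_height_at(x, y) - z        # S6: mirror z about H_R
        near = any((x - ox) ** 2 + (y - oy) ** 2 + (z_inv - oz) ** 2
                   <= radius ** 2 for ox, oy, oz in others)
        mirror_flags.append(near)                     # S7/S8
    return mirror_flags
```

A low point with no other observation point near its inverted position is left unclassified, which corresponds to the return to Step S1 in the flow.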
<Grouping Processing>
[0059] When the processing in the observation point determination unit 23 is completed, the grouping unit 24 performs grouping processing for using the plurality of observation points P determined as the mirror image observation points P.sub.M for detection of the object.
[0061] The grouping determination is sequentially performed on all the observation points P, and when the observation point P to be determined is the mirror image observation point P.sub.M (S11), the same inversion operation of the height information as in
[0062] As a correlation evaluation method, for example, the relative distance can be evaluated: when other observation points P exist closer than an arbitrary distance, the adjacent observation points are grouped as an observation point group with a strong correlation, in which the same object is detected.
[0063] The group G generated by the grouping unit 24 is sequentially held. After the grouping determination has been performed on all the observation points (S14), the extracted group G is transmitted to the recognition processing device 3 in the subsequent stage. By using this method, the mirror image observation point P.sub.M can be treated as an observation point of the real image, so the number of observation points P at which the same object is detected increases compared with performing similar processing only on the observation points P of the real image; the information becomes denser, and the recognition accuracy is improved.
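The distance-based grouping above can be sketched as a simple agglomeration. This is an illustrative sketch in which mirror image observation points are assumed to have already been inverted to real image coordinates; the threshold and names are assumptions.

```python
import math

def group_points(points, max_gap=1.0):
    """Group observation points whose mutual distance is below max_gap.
    A point that bridges several existing groups merges them."""
    groups = []
    for p in points:
        hits = [g for g in groups if any(math.dist(p, q) < max_gap for q in g)]
        if not hits:
            groups.append([p])      # start a new group G
        else:
            hits[0].append(p)
            for g in hits[1:]:      # p links several groups: merge them
                hits[0].extend(g)
                groups.remove(g)
    return groups
```

With a 1 m gap, points at 0, 0.5 and 5 m form two groups; a chain of points spaced 0.9 m apart forms one.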
<Other Grouping Processing>
[0064] The grouping processing by the grouping unit 24 may be in another form as illustrated in
[0065] The grouping determination is sequentially performed for all the observation points P, and when the observation point P to be determined is the mirror image observation point P.sub.M (S21), its correlation with other mirror image observation points P.sub.M is evaluated and grouping is performed (S23). As a correlation evaluation method here, for example, the relative distance can be evaluated: when other mirror image observation points P.sub.M exist within an arbitrary distance, those adjacent mirror image observation points P.sub.M are grouped as a group G.sub.M of an observation point group with a strong correlation, in which a mirror image of the same object is detected.
[0066] When the observation point P to be determined is an observation point P of the real image, the observation points P in its vicinity are grouped as a group G of the real image observation point group (S22).
[0067] As a result of performing the above grouping determination on all the observation points P (S24), the group G.sub.M of the mirror image observation point group and the group G of the real image observation point group are obtained. Each grouping result is transmitted to the recognition processing device 3 in the subsequent stage. Alternatively, the result of integrating the respective grouping results may be transmitted to the recognition processing device 3. In this case, the coordinate information of the observation point group and the size information of the grouping result are inverted as described above so that the group G.sub.M of the mirror image observation point group becomes information corresponding to the group G of the real image observation point group (S25), and the group G.sub.M is integrated into the corresponding group G of the real image observation point group (S26). By using this method, the recognition processing device 3 in the subsequent stage can distinguish and manage the group G of the real image observation point group and the inverted group G.sub.M of the mirror image observation point group, so that recognition accuracy is improved by performing processing suitable for each group.
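The inversion and integration of S25/S26 might be sketched as follows. Groups are represented here as hypothetical dictionaries with a center and a size, and the matching tolerance is an assumption; none of these names appear in the disclosure.

```python
import math

def invert_group(gm, road_height=0.0):
    """Invert the position of a mirror image group G_M about the road
    surface so that it corresponds to a real image group G; only the
    height of the center is mirrored, the extent is carried over."""
    cx, cy, cz = gm["center"]
    return {"center": (cx, cy, 2.0 * road_height - cz), "size": gm["size"]}

def integrate_mirror_group(gm, real_groups, road_height=0.0, tol=1.0):
    """S25/S26 (illustrative): invert G_M and merge it into the nearest
    matching real image group G, holding the information of both."""
    inv = invert_group(gm, road_height)
    for g in real_groups:
        if math.dist(g["center"], inv["center"]) <= tol:
            g["mirror"] = inv     # hold the integrated mirror information
            return g
    return None                   # no corresponding real image group
```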
[0068] As described above, according to the present embodiment, the observation point group output by the LiDAR when the road surface in the vicinity of the own vehicle is wet can be classified into the real image and the mirror image, so that it is possible to avoid mistaking the mirror image for the real image.
Second Embodiment
[0069] Next, a second embodiment of the present invention will be described. Description of some points in common with the first embodiment will be omitted.
[0070] A mirror image on a wet road surface can be perceived by human vision, and a similar phenomenon is observed with a camera sensor. Therefore, an object sensing device using a camera sensor also has the problem that a mirror image is mistaken for a real image; this problem can be solved by using the LiDAR 1 together.
[0071] Therefore, first, in order to associate the range in which the camera sensor recognizes the outside world, defined by the installation posture of the camera sensor, with the range in which the LiDAR 1 recognizes the outside world, the position information detected by each sensor is converted into a spatial representation in which the same three-dimensional orthogonal coordinate system is shared between the sensors. The spatial representation shared between the sensors is not limited to a three-dimensional orthogonal coordinate system; for example, a polar coordinate system or a two-dimensional plane may be used. By superimposing the detection results on the shared spatial representation, it is possible to determine whether an object detected by the camera sensor is a real image or an erroneous detection in which a mirror image is detected.
[0073] The shared recognition space in
[0074] Alternatively, the grouping result detected by the object sensing device 2 may be used. In this case, when the group G of the real image observation point group and the group G.sub.M of the mirror image observation point group detected by the object sensing device 2 are included in the region recognized as an object in the object information 91 and 92, or when the respective detection regions overlap, the proportions of the group G of the real image observation point group and the group G.sub.M of the mirror image observation point group are calculated for each detected object, and the determination is performed using an arbitrarily set value as a threshold. When the group G.sub.M of the mirror image observation point group is included at a ratio equal to or larger than the threshold, it can be determined that the object information 92 is an erroneous detection of a mirror image.
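The ratio-based determination above can be sketched as follows: a minimal illustration assuming the shared recognition space is projected to a 2-D plane and the camera detection region is an axis-aligned box; the names and the threshold value are assumptions.

```python
def is_mirror_detection(box, real_points, mirror_points, ratio_threshold=0.5):
    """Count the real image and mirror image observation points falling
    inside the camera detection region box = (xmin, ymin, xmax, ymax);
    if the mirror fraction reaches the threshold, judge the camera
    object information as an erroneous detection of a mirror image."""
    def inside(p):
        return box[0] <= p[0] <= box[2] and box[1] <= p[1] <= box[3]
    n_real = sum(1 for p in real_points if inside(p))
    n_mirror = sum(1 for p in mirror_points if inside(p))
    total = n_real + n_mirror
    if total == 0:
        return False              # no LiDAR evidence inside the region
    return n_mirror / total >= ratio_threshold
```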
[0075] By performing object detection using the LiDAR 1, the camera sensor image, and the present invention in combination in this manner, detection errors can be reduced compared with object detection using a conventional camera sensor image alone.
REFERENCE SIGNS LIST
[0076] 100 OBJECT SENSING SYSTEM
[0077] 1 LiDAR
[0078] 2 OBJECT SENSING DEVICE
[0079] 21 ROAD SURFACE SHAPE ESTIMATION UNIT
[0080] 22 ROAD SURFACE CONDITION ESTIMATION UNIT
[0081] 23 OBSERVATION POINT DETERMINATION UNIT
[0082] 24 GROUPING UNIT
[0083] 3 RECOGNITION PROCESSING DEVICE
[0084] 4 VEHICLE CONTROL DEVICE