SLAM SYSTEM AND METHOD FOR VEHICLES USING BUMPER-MOUNTED DUAL LIDAR
20230236323 · 2023-07-27
Assignee
Inventors
- Kyungchang LEE (Busan, KR)
- Jaeheon JANG (Busan, KR)
- Jungho KANG (Busan, KR)
- Hyeongjun KIM (Busan, KR)
- Hyunhee KIM (Busan, KR)
CPC classification
International classification
G01S7/481
PHYSICS
Abstract
There is provided a simultaneous localization and mapping (SLAM) system including a first LiDAR and a second LiDAR mounted on a vehicle bumper; a LiDAR data merge unit receiving data from the first LiDAR and the second LiDAR, aligning LiDAR times through time synchronization, and then converting the data into a point cloud type and merging the data; an electronic control unit (ECU) providing inertial data of the vehicle for correcting the data merged in the LiDAR data merge unit; and an SLAM unit correcting the data merged in the LiDAR data merge unit by using the inertial data of the vehicle received from the ECU to obtain LiDAR odometry for estimating a movement of the vehicle, generating a 3D map of a road on which the vehicle travels, and extracting a location and a traveling route of the vehicle inside a road.
Claims
1. A simultaneous localization and mapping (SLAM) system for a vehicle using a bumper-mounted dual LiDAR, the SLAM system comprising: a first LiDAR and a second LiDAR mounted on a vehicle bumper to output data for map creation and location recognition; a LiDAR data merge unit receiving data from the first LiDAR and the second LiDAR, aligning LiDAR times through time synchronization, and then converting the data into a point cloud type and merging the data; an electronic control unit (ECU) providing inertial data of the vehicle for correcting the data merged in the LiDAR data merge unit; and an SLAM unit correcting the data merged in the LiDAR data merge unit by using the inertial data of the vehicle received from the ECU to obtain LiDAR odometry for estimating a movement of the vehicle, generating a 3D map of a road on which the vehicle travels, and extracting a location and a traveling route of the vehicle inside a road.
2. The SLAM system of claim 1, wherein raw data of each of the first LiDAR and the second LiDAR is expressed as one integrated coordinate through point cloud merge, and a relative position difference between sensors is obtained in an integrated coordinate system and applied to align all point clouds with the sensors in the corrected coordinate system as the origin.
3. The SLAM system of claim 1, wherein the raw data generated by the first LiDAR and the second LiDAR is received through UDP, and the data of each of the two LiDARs is stored in a buffer, and after times of the LiDARs are aligned through time synchronization, the data is converted into a point cloud type and merged.
4. The SLAM system of claim 1, wherein the LiDAR odometry is obtained using the point cloud data of the first LiDAR and the second LiDAR, and the odometry is calculated through matching between scans using features detected in LiDAR scans, and clustering of the received point cloud is performed in order to reduce the number of point clouds used for detection and minimize a load in an embedded board.
5. The SLAM system of claim 4, wherein, in clustering, a cluster with less than a set number of points is not trusted and not registered, and through this process, a discontinuous noise point is filtered out and only a reliable point is left.
6. The SLAM system of claim 4, wherein, after clustering the input point cloud, smoothness of each point is calculated and divided into edge and planar to extract features, a scan area is divided into a set number of sub-areas and edge and planar extraction is performed for each area to uniformly extract the calculated features, and thereafter, correspondence of the features between two consecutive scans is calculated to obtain the odometry.
7. The SLAM system of claim 6, wherein the odometry is obtained by calculating a transform matrix between the features having correspondence, and at this time, in order to solve the transform matrix as an optimization problem, optimization is performed with edge correspondence and planar correspondence as costs.
8. The SLAM system of claim 7, wherein, in the optimization process, a change of a z-axis in the odometry of the vehicle and a roll and pitch are measured through matching between the scans measured by LiDAR, when calculating a movement of the vehicle in x and y directions on the road, a route estimation value is provided using IMU data of the vehicle to complement the odometry calculation, and data on longitudinal acceleration T.sub.x, lateral acceleration T.sub.y, and yaw rate θ.sub.yaw are output from the ECU of the vehicle, based on which T.sub.x, T.sub.y, θ.sub.yaw, which are x, y-axis movement and yaw rotation of the vehicle, are corrected.
9. A simultaneous localization and mapping (SLAM) method for a vehicle using a bumper-mounted dual LiDAR, the SLAM method comprising: receiving data for map creation and location recognition from a first LiDAR and a second LiDAR mounted on a vehicle bumper, aligning LiDAR times through time synchronization, and then converting the data into a point cloud type and merging the data; receiving inertial data of the vehicle and obtaining LiDAR odometry for estimating a movement of the vehicle using point clouds recognized by the first LiDAR and the second LiDAR; and generating a 3D point cloud map by registering the LiDAR point cloud in a global map according to the obtained odometry by performing SLAM using acceleration data output in a CAN format from an electronic control unit (ECU) inside the vehicle in order to increase precision of odometry and reduce the time required for calculation.
10. The SLAM method of claim 9, wherein the LiDAR odometry is obtained using the point cloud data of the first LiDAR and the second LiDAR, and the odometry is calculated through matching between scans using features detected in LiDAR scans, and clustering of the received point cloud is performed in order to reduce the number of point clouds used for detection and minimize a load in an embedded board.
11. The SLAM method of claim 10, wherein, in clustering, a cluster with less than a set number of points is not trusted and not registered, and through this process, a discontinuous noise point is filtered out and only a reliable point is left.
12. The SLAM method of claim 10, wherein after clustering the input point cloud, smoothness of each point is calculated and divided into edge and planar to extract features, a scan area is divided into a set number of sub-areas and an edge and a planar are extracted for each area to uniformly extract the calculated features, and thereafter, correspondence of the features between two consecutive scans is calculated to obtain the odometry.
13. The SLAM method of claim 12, wherein the odometry is obtained by calculating a transform matrix between the features having correspondence, and at this time, in order to solve the transform matrix as an optimization problem, optimization is performed with edge correspondence and planar correspondence as costs.
14. The SLAM method of claim 13, wherein, in the optimization process, a change of a z-axis in the odometry of the vehicle and a roll and pitch are measured through matching between the scans measured by LiDAR, when calculating a movement of the vehicle in x and y directions on the road, a route estimation value is provided using IMU data of the vehicle to complement the odometry calculation, and data on longitudinal acceleration T.sub.x, lateral acceleration T.sub.y, and yaw rate θ.sub.yaw are output from the ECU of the vehicle, based on which T.sub.x, T.sub.y, and θ.sub.yaw, which are x, y-axis movement and yaw rotation of the vehicle, are corrected.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0068] Hereinafter, an embodiment of a simultaneous localization and mapping (SLAM) system and method for a vehicle using a bumper-mounted dual LiDAR according to the present disclosure will be described in detail.
[0069] Features and advantages of the SLAM system and method for a vehicle using a bumper-mounted dual LiDAR according to the present disclosure will become clear through detailed description of each embodiment below.
[0071] Although the terms used in this specification are selected, as much as possible, from general terms that are widely used at present while taking into consideration the functions of the elements obtained in accordance with one embodiment, these terms may be replaced by other terms based on intentions of those skilled in the art, customs, emergence of new technologies, or the like. In addition, in certain instances, terms that are arbitrarily selected by the applicant may be used. In this case, meanings of these terms will be disclosed in detail in the corresponding part of the description of the invention. Accordingly, the terms used herein should be defined based on practical meanings thereof and the whole content of this specification, rather than based on names of the terms.
[0072] Throughout the specification, unless explicitly described to the contrary, the word “comprise” and variations such as “comprises” or “comprising”, will be understood to imply the inclusion of stated elements but not the exclusion of any other elements. Further, in the present disclosure, a ‘module’ or a ‘unit’ performs at least one function or operation and may be implemented by hardware or software or a combination of the hardware and the software.
[0073] In particular, units processing at least one function or operation may be implemented as an electronic device including at least one processor, and at least one peripheral device may be connected to the electronic device according to a method of processing the function or operation. Peripheral devices may include a data input device, a data output device, and a data storage device.
[0074] A SLAM system and method for a vehicle using a bumper-mounted dual LiDAR according to the present disclosure enables precise map creation and location recognition in an embedded board using bumper-mounted dual LiDAR and inertial data of a vehicle ECU.
[0075] To this end, in order to solve an aerodynamic problem and the drop-off problem caused by a LiDAR mounted on a roof of a vehicle, the present disclosure may include a configuration in which two LiDARs are mounted on a bumper to efficiently create a three-dimensional (3D) map through SLAM.
[0076] As shown in
[0077] As shown in
[0078] Here, raw data of each of the first LiDAR 10 and the second LiDAR 20 is expressed as one integrated coordinate through point cloud merge. A relative position difference between sensors is obtained in an integrated coordinate system and applied to align all point clouds with the sensors in the corrected coordinate system as the origin.
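For illustration only, the merge into one integrated coordinate system may be sketched as a rigid transform applied to each sensor's raw points; the mounting offsets and function names below are hypothetical assumptions, not values taken from the disclosure:

```python
import math

def make_transform(x, y, yaw):
    """2D rigid transform (rotation about z plus translation) for a sensor
    mounted at (x, y) with heading yaw in the integrated coordinate system."""
    c, s = math.cos(yaw), math.sin(yaw)
    return (c, -s, x), (s, c, y)

def apply_transform(T, points):
    """Express (x, y, z) points measured in a sensor frame in the
    integrated frame; z is passed through in this planar sketch."""
    (r00, r01, tx), (r10, r11, ty) = T
    return [(r00 * px + r01 * py + tx, r10 * px + r11 * py + ty, pz)
            for (px, py, pz) in points]

# Hypothetical mounting poses of the two bumper LiDARs (meters, radians)
T_first = make_transform(-0.6, 0.0, math.radians(35.0))
T_second = make_transform(0.6, 0.0, math.radians(-35.0))

def merge_clouds(cloud_first, cloud_second):
    """Align both raw clouds to the corrected coordinate system and merge."""
    return apply_transform(T_first, cloud_first) + apply_transform(T_second, cloud_second)
```

In this sketch, the relative position difference between the sensors is encoded entirely in the two mounting transforms.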
[0079] At this time, the raw data generated by the LiDARs is received through UDP, and the data of each of the two LiDARs is stored in a buffer, and after times of the LiDARs are aligned through time synchronization, the data is converted into a point cloud type and merged.
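The buffering and time-alignment step may be sketched as follows; the tolerance value and class names are illustrative assumptions, and the actual UDP reception is omitted:

```python
from collections import deque

class LidarBuffer:
    """Holds timestamped packets received (e.g., over UDP) from one LiDAR."""
    def __init__(self):
        self.packets = deque()

    def push(self, t, data):
        self.packets.append((t, data))

def pop_aligned(buf_a, buf_b, tol=0.05):
    """Pop the oldest pair of packets whose timestamps agree within tol
    seconds; an out-of-sync head packet on either side is discarded so
    the two streams stay aligned before conversion and merging."""
    while buf_a.packets and buf_b.packets:
        ta, da = buf_a.packets[0]
        tb, db = buf_b.packets[0]
        if abs(ta - tb) <= tol:
            buf_a.packets.popleft()
            buf_b.packets.popleft()
            return (ta, da), (tb, db)
        # drop whichever head packet is older and retry
        (buf_a if ta < tb else buf_b).packets.popleft()
    return None
```

Each aligned pair would then be converted into point cloud form and merged.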
[0080] Here, the point cloud is a type of data mainly collected by LiDAR and RGB-D sensors (depth cameras). These sensors send light, ultrasound, or a laser to an object, measure a time for which reflected light, ultrasound or laser is returned, and calculate a distance per signal to generate one point. This is performed repeatedly or simultaneously according to the resolution of the sensor to generate a large number of points. The number of generated points varies depending on the precision of the sensor, but a point cloud refers to a set cloud of numerous points spread in a 3D space.
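The per-point range computation described above reduces to halving the round-trip time of the emitted signal; a minimal sketch for a laser return:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_tof(round_trip_s):
    """The signal travels to the object and back, so the one-way
    distance is half the round-trip time multiplied by c."""
    return C * round_trip_s / 2.0
```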
[0081] Such a point cloud is arranged in a 3D array based on the sensor, and since each point has data (depth), the points are automatically rotated and arranged based on the sensor even without considering scale or rotation of the points in utilizing the data, so that vast amounts of point cloud data may be easily used.
[0082] In the SLAM system for a vehicle using a bumper-mounted dual LiDAR, a map is created and a location is recognized with the LiDAR mounted on the vehicle bumper and inertial information output from the inside of the vehicle.
[0083] For the safety of the vehicle traveling on the road, it is preferable to mount the LiDAR through an aluminum profile and a separate mount on the bumper of the vehicle.
[0084] In order to expand a recognition range of the LiDAR in the bumper of the vehicle, an operation of merging the point cloud data of each LiDAR is performed using the two LiDARs. LiDAR odometry that estimates a movement of the vehicle using the point cloud recognized by the LiDAR is then obtained.
[0085] In order to increase the precision of odometry and reduce the time required for calculation, SLAM is performed using vehicle acceleration data output in a CAN format from the ECU inside the vehicle.
[0086] A SLAM method for a vehicle using a bumper-mounted dual LiDAR according to the present disclosure is as follows.
[0088] The vehicle SLAM method using the bumper-mounted dual LiDAR according to the present disclosure includes receiving data for map creation and location recognition from the first LiDAR 10 and the second LiDAR 20 mounted on a vehicle bumper, aligning LiDAR times through time synchronization, and then converting the data into a point cloud type and merging the data, receiving inertial data of the vehicle and obtaining LiDAR odometry for estimating a movement of the vehicle using point clouds recognized by the first LiDAR 10 and the second LiDAR 20, and generating a 3D point cloud map by registering the LiDAR point cloud in a global map according to the obtained odometry by performing SLAM using acceleration data output in a CAN format from an ECU inside the vehicle in order to increase precision of odometry and reduce the time required for calculation.
[0089] Taking the safe driving of the vehicle into consideration, two LiDARs are mounted on the bumper of the vehicle, and SLAM is performed with data output from the vehicle ECU to increase applicability of SLAM performed in the vehicle and to secure versatility.
[0090] SLAM is performed by inputting, to an embedded control board, the output of the LiDAR merging system that merges the point cloud data of the dual LiDAR, together with the longitudinal acceleration, lateral acceleration, and yaw rate of the vehicle in the traveling environment received through the ECU of the vehicle.
[0091] Odometry for SLAM is obtained through matching between LIDAR scans, and an initial value at this time is calculated through ECU data. According to the odometry obtained through this process, the LiDAR point cloud is registered in a global map to create a 3D point cloud map.
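Registration of each scan into the global map according to the obtained odometry may be sketched, in a planar form for brevity (function and variable names are illustrative, not from the disclosure):

```python
import math

def register_scan(global_map, scan, pose):
    """Transform a scan's (x, y) points by the odometry pose (x, y, yaw)
    and append them to the global map, accumulating a point cloud map."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    global_map.extend(
        (c * px - s * py + x, s * px + c * py + y) for px, py in scan)
```

Repeating this for every scan along the traveling route yields the 3D point cloud map.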
[0092] The input of the dual LiDAR is described in detail as follows.
[0093] Unlike mobile robots or AGVs, vehicles travel at high speeds, so the aerodynamic design of vehicles is very important in terms of both vehicle driving stability and vehicle fuel efficiency.
[0094] In addition, when an accident occurs, depending on the location of the mounted LiDAR, loss of the sensor may cause human injury or secondary accidents. Therefore, the LiDAR should be mounted on the bumper of the vehicle rather than on the roof, where it is mounted in most cases.
[0095] However, when the LiDAR is mounted on the bumper, the viewing angle is limited, unlike when it is installed on the roof of the vehicle. Therefore, two LiDARs are installed on the bumper to secure a maximum viewing angle.
[0096] Raw data of each LiDAR is expressed as one integrated coordinate through point cloud merge.
[0097] In the integrated coordinate system, a relative position difference between the sensors is obtained and applied to align all point clouds with the sensor in the corrected coordinate system as the origin.
[0098] At this time, the raw data generated by the LiDAR is received through UDP, and the data of each of the two LiDARs is stored in a buffer, and after times of the LiDARs are aligned through time synchronization, the data is converted into a point cloud type and merged.
[0100] The odometry of the vehicle is obtained using LiDAR point cloud data, and the odometry is calculated through matching between scans using features detected from LiDAR scans.
[0101] In order to reduce the number of point clouds used for detection and minimize a load in the embedded board, clustering of the input point clouds is performed.
[0102] In clustering, a cluster with less than 30 points is not trusted and not registered. Through this process, discontinuous noise points, such as small objects like leaves and paper shaking in the wind, are filtered out, and only reliable points, such as those from tree trunks and poles, are left.
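The clustering and noise-filtering step may be sketched as a simple gap-based grouping of consecutive scan points; the gap threshold below is a hypothetical value, while the 30-point minimum follows the description above:

```python
import math

def cluster_scan(points, gap=0.5, min_points=30):
    """Group consecutive (x, y) scan points whose Euclidean gap is below
    `gap` meters, then discard clusters smaller than `min_points`:
    small, discontinuous clusters are treated as noise, while large
    ones (e.g., trunks and poles) are kept."""
    clusters, current = [], []
    for p in points:
        if current and math.dist(current[-1], p) > gap:
            clusters.append(current)
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    return [c for c in clusters if len(c) >= min_points]
```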
[0103] Thereafter, for feature extraction, smoothness of each point is calculated and classified into edge and planar.
[0104] In order to uniformly extract the calculated features, a scan area is divided into 6 sub-areas, and edge and planar extraction is performed for each area.
[0105] Thereafter, correspondence of the features between two consecutive scans is calculated to obtain the odometry.
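The smoothness-based feature extraction over sub-areas may be sketched as follows; the smoothness formula is a common LOAM-style form and the per-sub-area counts are parameters, both stated here as assumptions rather than the exact formulation of the disclosure:

```python
def smoothness(ranges, k=5):
    """Per-point smoothness: normalized sum of range differences to k
    neighbors on each side. Large values indicate edges, small values
    indicate planar surfaces; boundary points are left at zero."""
    n = len(ranges)
    out = [0.0] * n
    for i in range(k, n - k):
        diff = sum(ranges[i + j] - ranges[i] for j in range(-k, k + 1))
        out[i] = abs(diff) / (2 * k * max(ranges[i], 1e-6))
    return out

def split_features(ranges, n_sub=6, k=5, edge_per_sub=2, planar_per_sub=6):
    """Divide the scan into n_sub sub-areas and, in each, take the
    sharpest points as edge features and the smoothest as planar
    features, so features are extracted uniformly over the scan."""
    c = smoothness(ranges, k)
    n = len(ranges)
    edges, planars = [], []
    for s in range(n_sub):
        lo, hi = s * n // n_sub, (s + 1) * n // n_sub
        idx = sorted(range(lo, hi), key=lambda i: c[i])
        planars += idx[:planar_per_sub]
        edges += idx[-edge_per_sub:]
    return edges, planars
```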
[0106] Finally, the odometry is obtained by calculating a transform matrix between the features having correspondence. At this time, the transform matrix is solved as an optimization problem: a small correspondence distance means that registration has been properly performed, so optimization is performed with the edge correspondence and planar correspondence distances as costs.
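A toy version of solving the transform as an optimization over correspondence distances may be sketched as follows; a real system would use Gauss-Newton or Levenberg-Marquardt with separate edge and planar costs, so this simple descent on point pairs is purely illustrative:

```python
import math

def cost(pose, pairs):
    """Sum of squared distances between transformed source features and
    their corresponding target features."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return sum((c * px - s * py + x - qx) ** 2 + (s * px + c * py + y - qy) ** 2
               for (px, py), (qx, qy) in pairs)

def solve_transform(pairs, iters=300, step=0.05):
    """Numeric gradient descent on (x, y, yaw); as the correspondence
    distances shrink, the registration improves."""
    pose = [0.0, 0.0, 0.0]
    for _ in range(iters):
        for d in range(3):
            eps = 1e-5
            hi, lo = pose[:], pose[:]
            hi[d] += eps
            lo[d] -= eps
            grad = (cost(hi, pairs) - cost(lo, pairs)) / (2 * eps)
            pose[d] -= step * grad
    return pose
```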
[0107] A process of applying ECU data is described as follows.
[0109] In the present disclosure, in order to calculate the odometry of SLAM, the odometry is calculated through an optimization technique with the distance of the feature obtained through LIDAR scan as a cost.
[0110] In the case of a vehicle traveling on the ground, there is only a change in altitude along the road in the case of a z-axis, which is perpendicular to the ground. Therefore, the change of the z-axis in the odometry of the vehicle and the roll and pitch are measured through matching between the scans measured by LiDAR.
[0111] However, when calculating a movement of the vehicle in x and y directions on the road, a route estimation value is provided using IMU data of the vehicle to complement the odometry calculation.
[0112] Data on longitudinal acceleration T.sub.x, lateral acceleration T.sub.y, and yaw rate θ.sub.yaw are output from the ECU of the vehicle, based on which T.sub.x, T.sub.y, and θ.sub.yaw, which are the x, y-axis movement and yaw rotation of the vehicle, are corrected.
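The ECU outputs can serve as an initial value for the x, y translation and yaw rotation between consecutive scans; a dead-reckoning sketch (variable names and the integration scheme are illustrative assumptions):

```python
def ecu_initial_guess(vx, vy, ax, ay, yaw_rate, dt):
    """Integrate longitudinal/lateral acceleration and yaw rate from
    the ECU over one scan interval dt to obtain an initial estimate of
    x/y translation and yaw rotation, plus updated body velocities."""
    vx2, vy2 = vx + ax * dt, vy + ay * dt      # updated body velocities
    tx = (vx + vx2) / 2.0 * dt                 # trapezoidal integration
    ty = (vy + vy2) / 2.0 * dt
    dyaw = yaw_rate * dt
    return tx, ty, dyaw, vx2, vy2
```

This initial guess is then refined by the scan-matching optimization described above.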
[0113] As shown in
[0114] An embedded environment establishment and sensor installation in a vehicle are described as follows.
[0116] Most current research on LiDAR data verification, range setting, SLAM, mapping, and map registration has been performed on the ROS middleware in a PC environment. In the present disclosure, however, both the LiDAR processing and the SLAM used to implement SLAM executed directly in a vehicle run on a vehicle embedded board.
[0117] ROS is middleware that runs on an OS, such as Ubuntu, and provides communication between the nodes of each program and time synchronization using ROS Time, but its TCP/IP-based communication and its various visualization and simulation tools are not required in the vehicle and only increase the amount of computation.
[0118] The board used in an embodiment of the present disclosure is a vehicle embedded board using an ARM processor, and includes an Ethernet (RJ45) input unit and a CAN input unit.
[0119] Therefore, the LiDAR input is received as UDP through an Ethernet port, and in the case of the IMU, CAN data of the ECU is converted through CANoe into a desired unit standard, and then transmitted to the embedded board through UDP again. By implementing this structure in the embedded board, the actual map and vehicle route are output so that they may be actually driven and applied to the vehicle.
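The CAN-to-UDP conversion step may be sketched as decoding raw CAN signal values into physical units and repacking them with a timestamp; the frame layout and scale factors below are hypothetical, as the real ones come from the vehicle's CAN database:

```python
import struct

def decode_can_frame(payload):
    """Hypothetical layout: three little-endian int16 raw values for
    longitudinal acceleration, lateral acceleration, and yaw rate,
    converted with example scale factors to m/s^2, m/s^2, and rad/s."""
    raw_ax, raw_ay, raw_yaw = struct.unpack('<hhh', payload[:6])
    return raw_ax * 0.01, raw_ay * 0.01, raw_yaw * 0.001

def pack_for_udp(t, ax, ay, yaw_rate):
    """Repack the converted values with a timestamp as a UDP payload
    for the embedded board."""
    return struct.pack('<dddd', t, ax, ay, yaw_rate)
```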
[0121] The performance evaluation results of the SLAM system and method for a vehicle using a bumper-mounted dual LiDAR according to the present disclosure are as follows.
[0122] For the LiDAR used in the performance evaluation, Velodyne's VLP-16 model was used. The VLP-16 model has 16 vertical (z-direction) channels and may detect 360°. In the experiment, the LiDAR scanned at 10 Hz; at this time, the azimuth (x-axis) resolution of each point was 0.2°, and 1800 points over 360° were measured in a single channel.
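The stated point counts follow directly from these scan parameters:

```python
fov_deg = 360.0      # horizontal field of view
az_res_deg = 0.2     # azimuth resolution per point
channels = 16        # vertical channels of the VLP-16
scan_hz = 10         # scan rate used in the experiment

points_per_channel = round(fov_deg / az_res_deg)  # 1800, as stated above
points_per_scan = points_per_channel * channels
points_per_second = points_per_scan * scan_hz
```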
[0123] The results of the viewing angle difference according to the position of the LiDAR are shown in Table 1.
TABLE-US-00001
TABLE 1
                                   roof      bumper
  viewing angle                    360°      178°
  detected features                18        10
  floor-surface scan channels      5         7
[0124] For comparison of the viewing angles, viewing angles measured for 1 second were compared while the vehicle was stopped at positions with sufficient surrounding objects, such as trees, pillars, and walls of buildings.
[0125] The number of objects output from the LiDAR indicated only objects that could be distinguished with the naked eye. In the case of the viewing angle, omnidirectional recognition was possible on the roof, but when mounted on the bumper, only the front (178°) of the vehicle was recognized.
[0126] In addition, even for recognized objects, it can be confirmed that the recognition rate at the bumper is lower, with the number of recognized objects falling from 18 (roof) to 10 (bumper) due to the restriction of the narrow viewing angle.
[0127] Finally, it can be seen that, among the 16 channels, the number of channels scanning the floor surface when level with the ground is 7 for the bumper and 5 for the roof. The roof has fewer floor-surface channels than the bumper because, when the LiDAR is installed on the roof, which is higher than the bumper, some channels scan too far away.
[0129] Comparison between the use of one LiDAR and the use of two LiDARs to compensate for the low viewing angle in the bumper is as follows.
TABLE-US-00002
TABLE 2
                                   single LiDAR    dual LiDAR
  viewing angle                    178°            290°
  detected features                10              14
  floor-surface scan channels      7               7
TABLE-US-00003
TABLE 3
                                       single LiDAR    dual LiDAR
  before entering corner                    4               7
  during cornering                          2               5
  immediately before exiting corner         1               3
[0130] In the case of using two LiDARs, one is installed at each corner of the bumper; the central portions of their fields of view overlap along the streamlined design of the bumper, and a portion of the rear toward the end of the bumper may also be recognized.
[0131] Therefore, the viewing angle increased by 110° compared to when one LiDAR was installed. As a result, it can be seen that the number of recognized objects is also 14, an increase of 4 compared to the existing single LiDAR.
[0132] This additional posterolateral view expands the range in which trees, pillars, and corners of buildings passing by the vehicle may be recognized as features in matching between scans when extracting the odometry of the vehicle.
[0133] In addition, when a vehicle makes a left or right turn, measurement may be made for a relatively long time until the feature is lost, and the feature of the turning direction may be maintained even after exiting a corner, which is advantageous for tracking the vehicle's rotation in odometry.
[0134] Due to the difference in mounting height, the LiDAR mounted on the bumper is more advantageous in detecting an object in front of the vehicle than the LiDAR installed on the roof.
[0135] In
[0137] In order to confirm the difference in performance of the dual LiDAR installed on the bumper compared to the single roof LiDAR, an experiment was conducted by driving while making a loop.
[0138] In the experiment, a final map created by driving the same route using the vehicle and odometry, the traveling route of the vehicle, were compared.
[0139] It can be seen that the LiDAR installed on the roof shows slightly better results. However, considering that the two LiDARs installed on the bumper have a field of view approximately 22% narrower than that of the LiDAR installed on the roof, the bumper result, which deviates by 14.6° at one corner compared to the roof-mounted LiDAR, shows that fairly high precision is maintained.
[0141] The difference in odometry according to the feature selection is as follows.
[0142] In estimating a position of the vehicle, the output varies according to the change in the number of features detected in the scan, as well as the viewing angle and point of the LiDAR.
[0143] In order to confirm the corresponding difference, the LiDAR was mounted on the roof of the vehicle and a change in odometry according to the number of features was measured by changing the detected features. The number of features actually extracted according to the change in the feature settings was measured.
[0144] All experiments were conducted in the same place, and there is only a difference between the detected edge and planar. For visualization, a visualization tool called Rviz of ROS was used and displayed below.
[0145] As a result of the experiment, it was confirmed that, in the case of outputting 1 edge and 1 planar and in the case of outputting 1 edge and 2 planars per sub-image (60°), the number of planars recognized by each was changed. It was also confirmed that, as the number of planar recognized in the sub-image changed from 1 to 2, the number of planars (yellow dots) in the entire image input for calculation in
[0146] To confirm that the odometry changes as the number of features is varied, SLAM was conducted through the same bag file while varying the number of features.
[0147] The results at this time are as shown in Table 4, and it was confirmed that loop closure succeeded and showed better results when the number of features was small. In addition, it was confirmed that loop closure does not run, or closes at a wrong point, when features are recognized above a certain level. Therefore, the number of edge and planar features used for localization was maintained at 2:6 per sub-area (60°).
TABLE-US-00004
TABLE 4
                  Absolute pose error               loop
  Edge  Planar    max     mean    min     rmse      closure
  1     2         14.42   6.32    0.00    7.70      ∘
        4         15.67   6.99    0.03    8.35      ∘
        6         15.82   7.01    0.03    8.40      ∘
        8         15.80   6.99    0.03    8.38      ∘
        10        15.79   6.98    0.03    8.38      ∘
  2     2         14.93   6.57    0.00    7.92      ∘
        4         14.42   6.19    0.00    7.59      ∘
        6         14.46   6.22    0.00    7.57      ∘
        8         14.39   6.15    0.00    7.55      ∘
        10        15.55   7.02    0.02    8.39      ∘
  4     2         14.69   6.05    0.02    7.47      ∘
        4         14.58   6.16    0.02    7.54      ∘
        6         14.51   6.37    0.00    7.70      ∘
        8         46.11   14.57   0.02    19.13     x
        10        14.63   6.34    0.00    7.70      ∘
  6     2         14.33   6.28    0.00    7.57      ∘
        4         74.01   23.47   0.01    32.04     x
        6         72.94   23.31   0.11    31.64     x
        8         14.21   5.88    0.04    7.27      ∘
        10        14.46   6.40    0.00    7.71      ∘
  8     2         14.75   6.46    0.00    7.85      ∘
        4         14.47   6.23    0.01    7.67      ∘
        6         14.44   6.04    0.04    7.43      ∘
        8         73.09   23.21   0.01    31.59     x
        10        74.09   23.74   0.04    31.44     x
  All             23.08   8.77    0.02    11.12     ∘
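The absolute pose error statistics of the kind reported in Table 4 (max, mean, min, rmse) can be computed from paired trajectory positions; a minimal sketch using 2D positions:

```python
import math

def absolute_pose_error(est, ref):
    """Translational error between paired estimated and reference poses,
    with the summary statistics used for the evaluation."""
    errs = [math.dist(p, q) for p, q in zip(est, ref)]
    stats = {
        'max': max(errs),
        'mean': sum(errs) / len(errs),
        'min': min(errs),
        'rmse': math.sqrt(sum(e * e for e in errs) / len(errs)),
    }
    return errs, stats
```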
[0149] In order to confirm the effect of providing initial values for vehicle movement route estimation using data output from the ECU on the actual vehicle movement route estimation, routes with and without ECU data were compared.
[0150] The results of the experiment, which were compared after driving along a pre-determined route, showed an average error of 91 m when the initial position was estimated using only the LiDAR throughout the driving.
[0151] The maximum error that occurred was 207 m, and the error mainly occurred in sharp corner portions. However, when the initial position was estimated using the ECU and the LiDAR together, the average error was 24 m and the maximum error was 41 m. This error tended to increase when driving at high speed and was corrected when ICP and loop closure were continuously performed by stopping for a sufficient time after driving. However, loop closure through ICP was not possible when too large an error occurred in the initial position.
TABLE-US-00005
TABLE 5
                              Initial value    Initial value
                              provided         not provided
  average error of route      24.3 m           91.6 m
  maximum error of route      41.8 m           207.9 m
  rmse of route               29.4 m           110.0 m
[0152] Through the ARM board, input/output of various sensors and algorithm calculation are performed to check a vehicle location and map output. In the experiment, both the dual LiDAR of the bumper and the vehicle ECU data were used. The vehicle was driven in several environments and tested on real roads. The vehicle drove 0.9 km, 4.8 km, and 4.1 km, respectively, and the traveling route and generated map were compared with a satellite map.
[0154] Each environment has different characteristics. In the first embodiment, there is a difference in height along the route, and sections with many steep slopes (1) and sharp turns (2) are included.
[0155] In addition, there is also a section (3) in which no feature is output because one side is blocked by a wall in some sections.
[0156] In the case of the second embodiment, the route is flat and 90° rotations occur frequently. In addition, as all routes overlapped in a central roundabout (1), the test was conducted to evaluate the cumulative error according to each rotation. In the course of driving, loops meet at the central roundabout and some intersections to form loop closures, so the route includes 4 small loops. In the last case, the longest single run of 4.1 km was done at once without a loop closure. Among the routes, the long straight road (1) includes terrain with a very small number of objects.
[0157] As a result of the experiment, it was confirmed that the map was output normally in all areas, and it was confirmed that the cumulative error correction on the map operated normally through the loop closure.
[0158] In the SLAM system and method for a vehicle using a bumper-mounted dual LiDAR according to the present disclosure described above, the data of two LiDARs mounted on a vehicle bumper is integrated and processed with a single time base and a unified coordinate system, information on the traveling route of the vehicle is extracted and corrected using inertial data output from the ECU inside the vehicle, and the LiDAR point cloud data is registered along the movement route, so that a precise map may be created without adding a sensor.
[0159] As described above, it will be understood that the present disclosure is implemented in a modified form without departing from the essential characteristics of the present disclosure.
[0160] Therefore, the specified embodiments should be considered from an explanatory point of view rather than a limiting point of view, the scope of the present disclosure is shown in the claims rather than the foregoing description, and all differences within the equivalent range are considered to be included in the present disclosure.
DESCRIPTION OF REFERENCE NUMERALS
[0161]
TABLE-US-00006
10: First LiDAR
20: Second LiDAR
30: LiDAR data merge unit
40: ECU
50: SLAM unit