DEVICE AND METHOD FOR STEERING VEHICLE
20250136105 · 2025-05-01
Inventors
- Jheng-Rong Wu (Changhua County, TW)
- Chiung-Hung Chen (New Taipei City, TW)
- Hong-Xian Tsai (Hsinchu City, TW)
- Pei-Jung Liang (Hsinchu County, TW)
CPC classification
- B60W2420/403 (PERFORMING OPERATIONS; TRANSPORTING)
- B60W2552/53 (PERFORMING OPERATIONS; TRANSPORTING)
- B60W2420/503 (PERFORMING OPERATIONS; TRANSPORTING)
- B60W10/20 (PERFORMING OPERATIONS; TRANSPORTING)
- B60W50/00 (PERFORMING OPERATIONS; TRANSPORTING)
- G06F17/17 (PHYSICS)
International classification
- B60W10/20 (PERFORMING OPERATIONS; TRANSPORTING)
- B60W50/00 (PERFORMING OPERATIONS; TRANSPORTING)
Abstract
A device and a method for steering a vehicle are provided. The method includes the following steps: obtaining point cloud data of a vehicle through a LiDAR, obtaining an RGB image of the vehicle through a camera, and obtaining a current speed of the vehicle through a wheel speed sensor; using the current speed and local path way points associated with the point cloud data to obtain a target angle; using the current speed and a central lane distance error associated with the RGB image to obtain a compensator angle; and using the target angle and the compensator angle to obtain a steering command, and steering the vehicle to drive in a lane according to the steering command.
Claims
1. A device for steering a vehicle, comprising: a LiDAR; a camera; a wheel speed sensor; and a processor, coupled to the LiDAR, the camera, and the wheel speed sensor, wherein the processor obtains point cloud data of the vehicle through the LiDAR, obtains an RGB image of the vehicle through the camera, and obtains a current speed of the vehicle through the wheel speed sensor, the processor uses the current speed and local path way points associated with the point cloud data to obtain a target angle, the processor uses the current speed and a central lane distance error associated with the RGB image to obtain a compensator angle, the processor uses the target angle and the compensator angle to obtain a steering command of the vehicle, and steers the vehicle to drive in a lane according to the steering command.
2. The device for steering the vehicle according to claim 1, further comprising a storage medium coupled to the processor, wherein the storage medium stores an HD map, a vector map, a start point of the vehicle, and a target point of the vehicle, wherein the processor performs pose estimation on the point cloud data, the HD map, and the start point to obtain a current pose of the vehicle; the processor performs global path planning on the vector map, the current pose, and the target point to obtain global path way points of the vehicle; the processor performs local path planning on the current pose and the global path way points to obtain the local path way points of the vehicle.
3. The device for steering the vehicle according to claim 2, wherein the pose estimation comprises Normal Distributions Transform (NDT).
4. The device for steering the vehicle according to claim 2, wherein the global path planning comprises Trajectory Planning.
5. The device for steering the vehicle according to claim 2, wherein the local path planning comprises Roll-Out Generation.
6. The device for steering the vehicle according to claim 1, wherein the processor performs a target angle calculation operation on the current speed and the local path way points to obtain the target angle, wherein the target angle calculation operation comprises Pure Pursuit.
7. The device for steering the vehicle according to claim 1, wherein the processor performs a lane detection operation on the RGB image to obtain the central lane distance error, wherein the lane detection operation comprises You Only Look Once (YOLO).
8. The device for steering the vehicle according to claim 1, wherein the processor performs a compensator angle calculation operation on the current speed and the central lane distance error to obtain the compensator angle, wherein the compensator angle calculation operation comprises proportional integral derivative control (PID control).
9. The device for steering the vehicle according to claim 1, wherein the processor obtains an Around View Monitor (AVM) RGB image of the vehicle through the camera, and the processor uses the AVM RGB image to perform an accuracy evaluation operation.
10. The device for steering the vehicle according to claim 9, wherein the accuracy evaluation operation comprises an accuracy root mean square error (RMSE) evaluation operation.
11. A method for steering a vehicle, adapted to a device comprising a LiDAR, a camera, and a wheel speed sensor, and the method for steering the vehicle comprising: obtaining point cloud data of the vehicle through the LiDAR, obtaining an RGB image of the vehicle through the camera, and obtaining a current speed of the vehicle through the wheel speed sensor; using the current speed and local path way points associated with the point cloud data to obtain a target angle; using the current speed and a central lane distance error associated with the RGB image to obtain a compensator angle; and using the target angle and the compensator angle to obtain a steering command of the vehicle, and steering the vehicle to drive in a lane according to the steering command.
12. The method for steering the vehicle according to claim 11, wherein using the current speed and the local path way points associated with the point cloud data to obtain the target angle comprises: performing pose estimation on the point cloud data, an HD map, and a start point of the vehicle to obtain a current pose of the vehicle; performing global path planning on a vector map, the current pose, and a target point of the vehicle to obtain global path way points of the vehicle; and performing local path planning on the current pose and the global path way points to obtain the local path way points of the vehicle.
13. The method for steering the vehicle according to claim 12, wherein the pose estimation comprises Normal Distributions Transform (NDT).
14. The method for steering the vehicle according to claim 12, wherein the global path planning comprises Trajectory Planning.
15. The method for steering the vehicle according to claim 12, wherein the local path planning comprises Roll-Out Generation.
16. The method for steering the vehicle according to claim 11, wherein using the current speed and the local path way points associated with the point cloud data to obtain the target angle comprises: performing a target angle calculation operation on the current speed and the local path way points to obtain the target angle, wherein the target angle calculation operation comprises Pure Pursuit.
17. The method for steering the vehicle according to claim 11, wherein using the current speed and the central lane distance error associated with the RGB image to obtain the compensator angle comprises: performing a lane detection operation on the RGB image to obtain the central lane distance error, wherein the lane detection operation comprises YOLO.
18. The method for steering the vehicle according to claim 11, wherein using the current speed and the central lane distance error associated with the RGB image to obtain the compensator angle comprises: performing a compensator angle calculation operation on the current speed and the central lane distance error to obtain the compensator angle, wherein the compensator angle calculation operation comprises proportional integral derivative control (PID control).
19. The method for steering the vehicle according to claim 11, further comprising: obtaining an Around View Monitor (AVM) RGB image of the vehicle through the camera; and using the AVM RGB image to perform an accuracy evaluation operation.
20. The method for steering the vehicle according to claim 19, wherein the accuracy evaluation operation comprises an accuracy root mean square error (RMSE) evaluation operation.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure.
DESCRIPTION OF THE EMBODIMENTS
[0021] The processor 140 may include a central processing unit (CPU), or other programmable general-purpose or special-purpose micro control unit (MCU), microprocessor, digital signal processor (DSP), programmable controller, application specific integrated circuit (ASIC), graphics processing unit (GPU), image signal processor (ISP), image processing unit (IPU), arithmetic logic unit (ALU), complex programmable logic device (CPLD), field programmable gate array (FPGA) or other similar components or a combination of the above components. The processor 140 may access and execute multiple modules and various applications stored in the storage medium 150.
[0022] The storage medium 150 may include any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk drive (HDD), solid state drive (SSD), or similar components, or a combination of the aforementioned components, and is used for storing multiple modules or various applications that may be executed by the processor 140.
[0024] In step S210, the processor 140 may obtain point cloud data of the vehicle through the LiDAR 110, obtain an RGB image of the vehicle through the camera 120, and obtain a current speed of the vehicle through the wheel speed sensor 130.
[0025] In step S220, the processor 140 may obtain a target angle by using the current speed and local path way points associated with the point cloud data. In an embodiment, the storage medium 150 may store a high definition (HD) map, a vector map, a start point of the vehicle, and a target point of the vehicle. The HD map is, for example, a point cloud map of a scene and may be stored in the PCD format. The vector map may define road patterns, lane locations, intersections, and traffic signals/traffic signs.
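As a minimal sketch of how an HD map stored in the PCD format might be loaded, assuming the Open3D library and an illustrative file path (neither is specified by the disclosure):

```python
# A minimal sketch, assuming the Open3D library; the file path is
# illustrative, as the disclosure only states the HD map may use PCD format.
import numpy as np
import open3d as o3d

hd_map = o3d.io.read_point_cloud("maps/scene_hd_map.pcd")  # hypothetical path
map_points = np.asarray(hd_map.points)                     # (N, 3) XYZ array
print(f"HD map contains {len(map_points)} points")
```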
[0028] In other embodiments, the above-mentioned pose estimation may also include iterative closest point (ICP), point cloud matching based on deep learning, LiDAR odometry and mapping (LOAM), and fast point feature histograms (FPFH). However, the disclosure is not limited thereto. In detail, the iterative closest point algorithm may find the best alignment between a target point cloud and a source point cloud by iteratively minimizing the average distance between corresponding points. On the other hand, point cloud matching based on deep learning may learn features of the point clouds through a deep learning model to achieve more accurate matching.
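The following is a minimal NumPy sketch of the point-to-point ICP idea described above; it is illustrative only, and a production system would typically use a library implementation with k-d tree correspondence search rather than this brute-force matching.

```python
# Minimal point-to-point ICP sketch (illustrative, not the disclosed NDT method).
import numpy as np

def icp_step(source: np.ndarray, target: np.ndarray):
    """One ICP iteration: match each source point to its nearest target point,
    then solve for the rigid transform minimizing the mean squared distance
    between matched pairs (Kabsch/SVD solution)."""
    # Brute-force nearest-neighbor correspondences, for clarity.
    dists = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(dists, axis=1)]
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    # Optimal rotation via SVD of the cross-covariance matrix.
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

def icp(source: np.ndarray, target: np.ndarray, iterations: int = 20) -> np.ndarray:
    """Iteratively align the source cloud to the target cloud."""
    for _ in range(iterations):
        R, t = icp_step(source, target)
        source = source @ R.T + t
    return source
```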
[0031] In other embodiments, the above-mentioned global path planning may include Mission Planner, Route Planner, A*, D*, machine learning, and neural network methods. On the other hand, the above-mentioned local path planning may include Motion Planner, A*, Dynamic Window Approach (DWA), Rapidly-exploring Random Trees (RRT), Linear Quadratic Regulation (LQR), and Model Predictive Control (MPC), but the disclosure is not limited thereto.
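As an illustration of one of the planners named above, the following is a minimal A* sketch over a 4-connected occupancy grid; the grid representation and Manhattan heuristic are assumptions made for the example, not details from the disclosure.

```python
# Minimal A* grid-search sketch; grid and heuristic are illustrative assumptions.
import heapq

def a_star(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid.
    grid[r][c] == 1 marks an obstacle; returns a list of cells or None."""
    def h(cell):  # Manhattan distance, admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None
```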
[0034] In other embodiments, the above-mentioned target angle calculation operation may also include model predictive control (MPC), fuzzy logic control (FLC), proportional integral derivative control (PID control) and artificial neural network (ANN). However, the disclosure is not limited thereto.
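Pure Pursuit, the target angle calculation operation named in the claims, steers toward a lookahead point on the local path. The following is a minimal sketch; the wheelbase, lookahead gain, and the assumption that the way points are expressed in the vehicle frame (x forward, y left, rear axle at the origin) are illustrative choices, not values from the disclosure.

```python
# Minimal Pure Pursuit sketch; parameters are illustrative assumptions.
import math

def pure_pursuit_angle(waypoints, speed, wheelbase=2.7, gain=0.5, min_lookahead=3.0):
    """Return the target steering angle (rad) toward the first local path
    way point at least one lookahead distance from the rear axle."""
    lookahead = max(min_lookahead, gain * speed)   # speed-scaled lookahead
    target = next((p for p in waypoints
                   if math.hypot(p[0], p[1]) >= lookahead), waypoints[-1])
    dist = math.hypot(target[0], target[1])
    alpha = math.atan2(target[1], target[0])       # heading error to the target
    # Classic Pure Pursuit curvature-to-steering relation.
    return math.atan2(2.0 * wheelbase * math.sin(alpha), dist)
```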
[0038] In other embodiments, the above-mentioned lane detection operation may include edge detection and semantic segmentation, but the disclosure is not limited thereto. In detail, edge detection is a classic machine vision approach. For example, the processor 140 may use the Canny edge detection algorithm to detect edges in the RGB image, and then use the Hough Transform to convert the detected edges into straight lines, thereby identifying the lanes in the RGB image. On the other hand, semantic segmentation may divide the RGB image into different regions to implement the lane detection operation; semantic segmentation models include, for example, Mask R-CNN and U-Net.
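The edge detection variant above maps directly onto standard OpenCV calls. A minimal sketch, with thresholds and line parameters as illustrative values that would be tuned per camera:

```python
# Minimal Canny + probabilistic Hough Transform sketch using OpenCV;
# all thresholds are illustrative and would be tuned for the actual camera.
import cv2
import numpy as np

def detect_lane_lines(rgb_image: np.ndarray) -> np.ndarray:
    """Return candidate lane line segments as rows of (x1, y1, x2, y2)."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)            # binary edge map
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return lines if lines is not None else np.empty((0, 1, 4), dtype=np.int32)
```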
[0041] In other embodiments, the compensator angle calculation operation may include model predictive control (MPC), fuzzy logic control (FLC) and artificial neural network (ANN), but the disclosure is not limited thereto.
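A minimal sketch of the claimed PID compensator; the gains and the speed-based attenuation of the correction are illustrative assumptions, not values from the disclosure.

```python
# Minimal PID compensator sketch; gains and speed scaling are assumptions.
class LaneCenteringPID:
    def __init__(self, kp: float = 0.10, ki: float = 0.01, kd: float = 0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def compensator_angle(self, lane_error: float, speed: float, dt: float) -> float:
        """lane_error: signed central lane distance error (m); dt: step (s)."""
        self.integral += lane_error * dt
        derivative = (lane_error - self.prev_error) / dt
        self.prev_error = lane_error
        angle = self.kp * lane_error + self.ki * self.integral + self.kd * derivative
        # Soften the correction at higher speeds (an illustrative choice).
        return angle / max(1.0, 0.1 * speed)
```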
[0044] In other embodiments, in order to evaluate the accuracy of the device 100 in steering the vehicle, the processor 140 may perform an accuracy evaluation operation offline, for example on an Around View Monitor (AVM) RGB image of the vehicle obtained through the camera 120.
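A minimal sketch of such an evaluation; the disclosure states only that the accuracy evaluation operation may comprise RMSE, so treating lateral lane-center offsets measured from the AVM RGB images as the error signal is an assumption of this example.

```python
# Minimal RMSE sketch; sourcing the offsets from AVM images is an assumption.
import numpy as np

def lane_keeping_rmse(center_offsets_m: np.ndarray) -> float:
    """Root mean square of recorded lateral offsets from the lane center (m)."""
    return float(np.sqrt(np.mean(np.square(center_offsets_m))))
```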
[0047] In summary, the device and method for steering a vehicle of the disclosure may use the information obtained by the LiDAR, camera and wheel speed sensor to obtain the target angle and the compensator angle, and then obtain the steering command to steer the vehicle to drive in the lane. In this way, the vehicle may be steered and kept stable in the center of the lane, thereby reducing the risk of accidents.
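The disclosure does not spell out how the target angle and the compensator angle are combined into the steering command; a simple additive blend with saturation is one plausible reading, sketched below.

```python
# One plausible combination of the two angles; the actual rule is not disclosed.
def steering_command(target_angle: float, compensator_angle: float,
                     max_angle: float = 0.6) -> float:
    """Blend path-tracking and lane-centering angles into one command (rad)."""
    return max(-max_angle, min(max_angle, target_angle + compensator_angle))
```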
[0048] It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims and their equivalents.