
Point cloud-based low-height obstacle detection system

A method, apparatus, and system for determining a low-height obstacle based on outputs of a LIDAR device in an autonomous vehicle is disclosed. A point cloud comprising a plurality of points is generated based on outputs of a LIDAR device. For each point within a first number of lowest rings of points, a neighboring point in the same ring in a first direction is determined, and first and second coordinate-value differences are determined. First, second, third, and fourth quantities are determined based on the first and second differences. In response to determining that the first, second, third, and fourth quantities satisfy a predetermined condition, a low-height obstacle is determined based on the points within the first number of lowest rings of points. Operations of the autonomous vehicle are controlled based at least in part on the determined low-height obstacle.
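
The abstract leaves the differences and the predetermined condition unspecified. A minimal sketch of the idea, assuming the first difference is the horizontal-range step and the second is the height step between ring neighbors (the function names, thresholds, and hit count are all illustrative, not from the patent):

```python
import math

def ring_neighbor_diffs(ring):
    """For each point in a ring (a list of (x, y, z) tuples ordered by
    azimuth), compute differences to its neighbor in one direction.
    Assumed reading: first difference = horizontal-range step,
    second difference = height step."""
    diffs = []
    for i in range(len(ring) - 1):
        x0, y0, z0 = ring[i]
        x1, y1, z1 = ring[i + 1]
        diffs.append((math.hypot(x1, y1) - math.hypot(x0, y0), z1 - z0))
    return diffs

def low_height_obstacle(lowest_rings, height_jump=0.10, range_jump=0.5, min_hits=3):
    """Flag a low-height obstacle when enough neighboring-point pairs in the
    lowest rings show an abrupt range or height discontinuity. All
    thresholds are illustrative assumptions."""
    hits = 0
    for ring in lowest_rings:
        for d_range, d_height in ring_neighbor_diffs(ring):
            if abs(d_height) > height_jump or abs(d_range) > range_jump:
                hits += 1
    return hits >= min_hits
```

A flat road returns smooth neighbor differences, while a curb-height object produces repeated height jumps within the lowest rings.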

Transportation vehicle and collision avoidance method

A transportation vehicle includes at least one first sensor for capturing environment data, at least one second sensor for capturing transportation vehicle data, a communication module for establishing a data connection with another transportation vehicle, a driving system for automated driving of the transportation vehicle, at least one output element for a visible or audible warning signal, and a control unit. The control unit determines a predicted trajectory and a predicted path of the transportation vehicle, receives a predicted trajectory and vehicle geometry data of the other transportation vehicle via the data connection, determines a predicted path of the other transportation vehicle, determines a possible collision of the transportation vehicle with the other transportation vehicle, and, in response to a possible collision, outputs a warning signal via the at least one output element and/or carries out an automated driving maneuver via the driving system.
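
A minimal sketch of the path-overlap check, assuming a constant-velocity prediction model and rectangular vehicle footprints reduced to widths (all names and numbers are illustrative assumptions, not the patent's method):

```python
import math

def predict_path(x, y, vx, vy, steps=20, dt=0.1):
    """Constant-velocity path prediction (an assumed, simplified motion model)."""
    return [(x + vx * k * dt, y + vy * k * dt) for k in range(steps)]

def possible_collision(path_a, path_b, width_a=2.0, width_b=2.0):
    """Report a possible collision when, at any common time step, the two
    predicted paths come closer than the combined half-widths taken from
    the vehicles' geometry data."""
    min_gap = (width_a + width_b) / 2.0
    return any(math.hypot(xa - xb, ya - yb) < min_gap
               for (xa, ya), (xb, yb) in zip(path_a, path_b))
```

A positive result would trigger the warning output and/or the automated driving maneuver described above.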

System and method for determining realistic trajectories
11520342 · 2022-12-06

A system of a first vehicle includes sensors and a processor that generates an initial trajectory along a travel route of the first vehicle, acquires trajectories of second vehicles along the route, adjusts the initial trajectory based on the acquired trajectories of the second vehicles, and navigates the first vehicle based on the adjusted trajectory.
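
One simple way to realize the adjustment step is to blend the planned lateral positions toward what other vehicles actually drove. A sketch under that assumption (the blending rule and `weight` parameter are hypothetical):

```python
def adjust_trajectory(initial, observed, weight=0.5):
    """Pull each planned point's lateral (y) coordinate toward the mean of
    the corresponding points of trajectories driven by second vehicles.
    All trajectories are lists of (x, y) sampled at the same stations;
    'weight' is an assumed tuning parameter."""
    adjusted = []
    for i, (x, y) in enumerate(initial):
        mean_y = sum(traj[i][1] for traj in observed) / len(observed)
        adjusted.append((x, y + weight * (mean_y - y)))
    return adjusted
```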

DRIVING SUPPORT DEVICE, DRIVING SUPPORT METHOD, AND COMPUTER PROGRAM PRODUCT
20220379894 · 2022-12-01

A driving support device includes a prediction unit, a trajectory determination unit, and a necessity determination unit. The prediction unit is configured to predict an increase degree of an inter-vehicle distance between other vehicles in response to a cut-in of the subject vehicle, and to determine whether a lane change is permissible based on the increase degree. The necessity determination unit is configured to determine whether a necessity level of the lane change is within an acceptable range. When the necessity level is determined to be within the acceptable range, the prediction unit cancels the increase-degree-based determination and instead determines whether the lane change is permissible based on a linear prediction of the behavior of the other vehicles.
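
The two decision branches can be sketched as follows, treating the increase degree as the average per-step change of the gap and the linear prediction as a straight-line extrapolation (thresholds and horizon are illustrative assumptions):

```python
def increase_degree(gap_samples):
    """Average per-step change of the inter-vehicle gap between the other
    vehicles (meters per sample)."""
    return (gap_samples[-1] - gap_samples[0]) / (len(gap_samples) - 1)

def lane_change_permissible(gap_samples, necessity_in_range=False,
                            min_degree=0.2, min_gap=10.0, horizon=5):
    """When the necessity level is within the acceptable range, the
    increase-degree test is cancelled and a linear prediction of the gap
    decides instead; otherwise the gap must be opening fast enough.
    All thresholds are illustrative assumptions."""
    if necessity_in_range:
        predicted_gap = gap_samples[-1] + increase_degree(gap_samples) * horizon
        return predicted_gap >= min_gap
    return increase_degree(gap_samples) >= min_degree
```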

VEHICLE DECELERATION PLANNING
20220379889 · 2022-12-01

Techniques for vehicle deceleration planning are discussed. The techniques include determining a first location and a first velocity of a vehicle. The techniques further include determining a second location and a second velocity of an object. Based on the first location, the second location, the first velocity, and the second velocity, a relative stopping distance between the vehicle and the object can be determined. If the relative stopping distance is less than a threshold distance, a first maximum deceleration value can be increased to a second maximum deceleration value, and a trajectory for the vehicle is determined based at least in part on the second maximum deceleration value.
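
Using the kinematic stopping distance v²/(2a), the escalation logic can be sketched as below, assuming the object is ahead in the same lane and both agents brake at the current limit (the deceleration values and threshold are illustrative, not from the patent):

```python
def stopping_distance(v, a):
    """Distance to come to rest from speed v (m/s) at constant deceleration a (m/s^2)."""
    return v * v / (2.0 * a)

def choose_max_decel(x_vehicle, v_vehicle, x_object, v_object,
                     a1=3.0, a2=6.0, threshold=5.0):
    """Relative stopping distance: the gap remaining between the stopped
    object and the stopped vehicle (object assumed ahead of the vehicle).
    Below the threshold, escalate from the first maximum deceleration a1
    to the second maximum deceleration a2. Values are illustrative."""
    rel_stop = (x_object + stopping_distance(v_object, a1)) \
             - (x_vehicle + stopping_distance(v_vehicle, a1))
    return a2 if rel_stop < threshold else a1
```

The trajectory planner would then use the returned value as its deceleration bound.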

Vehicle and method of controlling the same
11511731 · 2022-11-29

A method of controlling a vehicle includes: recognizing a forward vehicle by processing image data captured by an image sensor disposed at the vehicle so as to have a field of view outside the vehicle; obtaining a distance to the forward vehicle by processing detection data captured by a radar disposed at the vehicle so as to have a detection area outside the vehicle; obtaining a change amount of vertical movement of the forward vehicle in the image data when the distance to the forward vehicle is equal to or less than a reference distance; obtaining a height of an obstacle on the road surface corresponding to the change amount; obtaining the height of the obstacle on the road surface from the image data when the distance to the forward vehicle exceeds the reference distance; identifying a driving speed of the vehicle; identifying a reference height corresponding to the driving speed; and outputting deceleration guide information when the height of the obstacle on the road surface is greater than or equal to the reference height.
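
The final comparison step can be sketched as a speed-dependent lookup followed by a threshold test; the specific heights and speed bands below are illustrative assumptions:

```python
def reference_height(speed_kph):
    """Speed-dependent reference height (m): an illustrative lookup where a
    faster vehicle tolerates a lower obstacle before guidance is needed."""
    if speed_kph >= 80:
        return 0.05
    if speed_kph >= 40:
        return 0.10
    return 0.15

def should_output_deceleration_guide(obstacle_height_m, speed_kph):
    """Output the deceleration guide when the obstacle height on the road
    surface meets or exceeds the reference height for the driving speed."""
    return obstacle_height_m >= reference_height(speed_kph)
```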

Vehicle and control method thereof

A vehicle includes: a communication device configured to request, from a neighboring vehicle, first data related to autonomous driving of the neighboring vehicle and to receive the first data from the neighboring vehicle while the vehicle is driving; a sensor device configured to sense second data regarding a state of a user of the vehicle and to detect third data regarding driving information of the neighboring vehicle; and a controller configured to classify risks of the vehicle into classes according to a preset criterion based on the first, second, and third data, and to score the risks based on the classified classes.
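
A minimal sketch of the classify-then-score step, treating each data source as a normalized risk value; the class boundaries and per-class scores stand in for the patent's unstated preset criterion:

```python
RISK_CLASS_SCORES = {"low": 1, "medium": 2, "high": 3}  # assumed scoring per class

def classify_risk(value, low=0.3, high=0.7):
    """Map a normalized risk value in [0, 1] onto a class; the boundaries
    are illustrative stand-ins for the preset criterion."""
    if value < low:
        return "low"
    if value < high:
        return "medium"
    return "high"

def score_risks(v2v_risk, driver_state_risk, sensed_risk):
    """Classify the risks derived from the first (V2V), second (user state),
    and third (sensed neighbor) data, then score them by class."""
    classes = [classify_risk(v) for v in (v2v_risk, driver_state_risk, sensed_risk)]
    return classes, sum(RISK_CLASS_SCORES[c] for c in classes)
```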

Systems and methods for vehicle motion control with interactive object annotation

Systems and methods for vehicle motion control with interactive object annotation are provided. A method can include obtaining data indicative of a plurality of objects within a surrounding environment of the autonomous vehicle. For example, the plurality of objects can include at least one problem object encountered by the autonomous vehicle while navigating a planned route. The method can include determining a group of objects of the plurality of objects. For example, the group of objects can include the problem object and one or more other objects in proximity to the problem object. The method can include determining a classification update to be applied to the group of objects. The method can include applying the classification update to the group of objects. The method can include providing data indicative of the classification update for the group of objects to the autonomous vehicle for use in motion planning.
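
The grouping and update steps can be sketched as a proximity query followed by a bulk relabel; the data layout and the 5 m radius are assumptions for illustration:

```python
import math

def group_with_problem_object(objects, problem_id, radius=5.0):
    """Group the problem object with every object within 'radius' meters of
    it. 'objects' maps an object id to a dict with 'pos' (x, y) and 'cls'."""
    px, py = objects[problem_id]["pos"]
    return [oid for oid, o in objects.items()
            if math.hypot(o["pos"][0] - px, o["pos"][1] - py) <= radius]

def apply_classification_update(objects, group, new_cls):
    """Apply one classification update to all members of the group."""
    for oid in group:
        objects[oid]["cls"] = new_cls
    return objects
```

The updated classifications would then be provided back to the vehicle's motion planner.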

Turning Assistant for a Vehicle
20230057397 · 2023-02-23

A method controls a first vehicle in respect of an oncoming second vehicle. The method determines a turning situation of the first vehicle, in which an expected first trajectory of the first vehicle crosses an expected second trajectory of the second vehicle, and controls the first vehicle in such a way that, during the turning situation, a predetermined distance between the vehicles is maintained. The control includes an influencing of the direction of travel of the first vehicle.
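
The direction-influencing control can be sketched as a proportional correction that activates only when the gap to the oncoming vehicle drops below the predetermined distance; the gain and distance values are illustrative assumptions:

```python
def steering_correction(dist_to_oncoming, d_min=3.0, gain=0.1):
    """Lateral steering correction (an illustrative proportional rule) that
    influences the direction of travel only while the gap to the oncoming
    vehicle is below the predetermined distance d_min; zero otherwise."""
    error = d_min - dist_to_oncoming
    return gain * error if error > 0 else 0.0
```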

DETERMINING OBJECT MOBILITY PARAMETERS USING AN OBJECT SEQUENCE
20230057118 · 2023-02-23

A system can use semantic images, lidar images, and/or 3D bounding boxes to determine mobility parameters for objects in the semantic image. In some cases, the system can generate virtual points for an object in a semantic image and associate the virtual points with lidar points to form denser point clouds for the object. The denser point clouds can be used to estimate the mobility parameters for the object. In certain cases, the system can use semantic images, lidar images, and/or 3D bounding boxes to determine an object sequence for an object. The object sequence can indicate a location of the object at different times. The system can use the object sequence to estimate the mobility parameters for the object.
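
The densification idea can be sketched with a simple stand-in: interpolating extra points between consecutive lidar returns on one object. This is only an illustration of "virtual points associated with lidar points"; the real system derives virtual points from semantic-image samples rather than interpolation:

```python
def densify_with_virtual_points(lidar_points, n_virtual=2):
    """Insert n_virtual linearly interpolated 'virtual' points between each
    pair of consecutive lidar points on one object, yielding a denser
    cloud for mobility-parameter estimation (illustrative stand-in)."""
    dense = []
    for p0, p1 in zip(lidar_points, lidar_points[1:]):
        dense.append(p0)
        for k in range(1, n_virtual + 1):
            t = k / (n_virtual + 1)
            dense.append(tuple(a + t * (b - a) for a, b in zip(p0, p1)))
    dense.append(lidar_points[-1])
    return dense
```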