Patent classifications
B60W2710/20
Methods and Systems for Trajectory Planning of a Vehicle
Disclosed are aspects of systems and computer-implemented methods for trajectory planning of a vehicle. In example embodiments, a method is carried out by computer hardware components and includes determining a reference for a trajectory of the vehicle for a prediction time horizon, with the reference varying over time during the prediction time horizon. The method also includes determining a location error based on the reference for the trajectory and determining a steering command for the vehicle based on the location error.
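The claimed control flow can be pictured with a minimal sketch; the proportional gain, steering clamp, and reference geometry below are all hypothetical, not taken from the disclosure:

```python
import math

def steering_command(reference, position, heading, k_p=0.5):
    """Map a location error to a steering command (gain and limits hypothetical)."""
    ref_x, ref_y = reference
    x, y = position
    # Location error: lateral offset of the vehicle from the time-varying reference.
    dx, dy = ref_x - x, ref_y - y
    lateral_error = -dx * math.sin(heading) + dy * math.cos(heading)
    # Proportional mapping from location error to a clamped steering angle.
    return max(-0.6, min(0.6, k_p * lateral_error))

# A reference that varies over the prediction time horizon (a straight offset line here).
horizon = [(float(t), 0.5) for t in range(5)]
commands = [steering_command(ref, (float(t), 0.0), 0.0)
            for t, ref in enumerate(horizon)]
```

Here the reference point at each step of the horizon yields one location error, and the error alone drives the steering command.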
AUTOMATED DRIVING SYSTEMS AND CONTROL LOGIC FOR LANE LOCALIZATION OF TARGET OBJECTS IN MAPPED ENVIRONMENTS
A method for controlling operation of a motor vehicle includes an electronic controller receiving, e.g., from a vehicle-mounted sensor array, sensor data with dynamics information for a target vehicle and, using the received sensor data, predicting a lane assignment for the target vehicle on a road segment proximate a host vehicle. The electronic controller also receives map data with roadway information for the road segment; the controller fuses the sensor and map data to construct a polynomial overlay for the host lane of the road segment in which the host vehicle travels. A piecewise linearized road map of the host lane is constructed and combined with the predicted lane assignment and polynomial overlay to calculate a lane assignment for the target vehicle. The controller then transmits one or more command signals to a resident vehicle system to execute one or more control operations using the target vehicle's calculated lane assignment.
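One way to picture the overlay-and-linearization step is the sketch below; the lane geometry, knot spacing, and half-lane width are all made up for illustration, and the actual sensor/map fusion is not specified here:

```python
import numpy as np

# Hypothetical fused lane-center samples ahead of the host vehicle
# (x forward in meters, y lateral offset in meters).
xs = np.linspace(0, 60, 13)
ys = 0.002 * xs**2                      # a gently curving host lane

coeffs = np.polyfit(xs, ys, 3)          # polynomial overlay for the host lane

# Piecewise linearized road map: straight segments between overlay samples.
knots = np.linspace(0, 60, 7)
segments = list(zip(knots[:-1], knots[1:],
                    np.polyval(coeffs, knots[:-1]),
                    np.polyval(coeffs, knots[1:])))

def lane_assignment(target_x, target_y, half_width=1.8):
    """Assign the target to the host lane if it lies within half a lane width
    of the linearized centerline (lane width hypothetical)."""
    for x0, x1, y0, y1 in segments:
        if x0 <= target_x <= x1:
            y_center = y0 + (y1 - y0) * (target_x - x0) / (x1 - x0)
            return abs(target_y - y_center) <= half_width
    return False
```

The piecewise map keeps the per-target check to a segment lookup and one interpolation, rather than re-evaluating the polynomial for every target.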
EMERGENCY MOTION CONTROL FOR VEHICLE USING STEERING AND TORQUE VECTORING
A method includes identifying a desired path for an ego vehicle. The method also includes determining how to apply steering control and torque vectoring control to cause the ego vehicle to follow the desired path. The determination is based on actuator delays associated with the steering control and the torque vectoring control and one or more limits of the ego vehicle. The method further includes applying at least one of the steering control and the torque vectoring control to create lateral movement of the ego vehicle during travel. Determining how to apply the steering control and the torque vectoring control may include using a state-space model that incorporates first-order time delays associated with the steering control and the torque vectoring control and using a linear quadratic regulator to determine how to control the ego vehicle based on the state-space model and the one or more limits of the ego vehicle.
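A toy version of the described state-space/LQR setup is sketched below, with a hypothetical four-state model (lateral error, lateral velocity, and two first-order actuator lags for steering and torque vectoring); the time step, time constants, gains, and weights are all assumptions:

```python
import numpy as np

# State: [lateral error, lateral velocity, steering lag state, torque-vectoring lag state]
dt, tau_s, tau_t = 0.05, 0.2, 0.1            # step and actuator time constants (hypothetical)
A = np.array([[1.0, dt,  0.0,          0.0],
              [0.0, 1.0, dt * 4.0,     dt * 2.0],   # both lagged actuators drive lateral motion
              [0.0, 0.0, 1 - dt/tau_s, 0.0],        # first-order time delay on steering
              [0.0, 0.0, 0.0,          1 - dt/tau_t]])  # first-order time delay on torque vectoring
B = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [dt/tau_s, 0.0],
              [0.0, dt/tau_t]])
Q = np.diag([10.0, 1.0, 0.1, 0.1])           # penalize path error most
R = np.diag([1.0, 1.0])                      # penalize actuator effort

# Solve the discrete-time Riccati equation by fixed-point iteration.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

x = np.array([1.0, 0.0, 0.0, 0.0])           # 1 m off the desired path
u = -K @ x                                   # combined steering / torque-vectoring command
```

Actuator limits from the claim could then be imposed by clipping `u` before it is applied.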
VANISHING POINT DETERMINATION, SYMMETRY-BASED BOUNDARY REFINEMENT, AND COMPONENT DETECTION FOR VEHICLE OBJECT DETECTION OR OTHER APPLICATIONS
A method includes obtaining, using at least one processing device, a vanishing point and a boundary based on image data associated with a scene, where the boundary is associated with a detected object within the scene. The method also includes repeatedly, during multiple iterations and using the at least one processing device, (i) identifying multiple patches within the boundary and (ii) determining a similarity of the image data contained within the multiple patches. The method further includes identifying, using the at least one processing device, a modification to be applied to the boundary based on the identified patches and the determined similarities. In addition, the method includes generating, using the at least one processing device, a refined boundary based on the modification, where the refined boundary identifies a specified portion of the detected object.
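The iterative patch-similarity refinement can be illustrated with a toy symmetry heuristic; the shift candidates, patch choice, and similarity measure below are illustrative, not the patented procedure:

```python
import numpy as np

def refine_boundary(image, box, iterations=5):
    """Shift a boundary box so its contents mirror best about the vertical axis
    (vehicles are roughly left-right symmetric). All heuristics hypothetical."""
    x0, y0, x1, y1 = box
    for _ in range(iterations):
        best_shift, best_score = 0, -np.inf
        for shift in (-1, 0, 1):                  # candidate horizontal modifications
            l, r = x0 + shift, x1 + shift
            patch = image[y0:y1, l:r]
            # Similarity of the patch with its own mirror image.
            score = -np.abs(patch - patch[:, ::-1]).mean()
            if score > best_score:
                best_shift, best_score = shift, score
        x0, x1 = x0 + best_shift, x1 + best_shift
        if best_shift == 0:
            break                                 # converged: no modification improves symmetry
    return x0, y0, x1, y1

# A synthetic "object" that is symmetric about column 10, with an off-center initial box.
img = np.tile(np.array([-abs(c - 10.0) for c in range(20)]), (20, 1))
box = refine_boundary(img, (9, 0, 16, 20))
```

Each iteration compares the image data inside candidate patches and keeps the modification with the highest similarity, mirroring the repeat-identify-compare loop in the claim.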
MULTI-FRAME IMAGE SEGMENTATION
Systems and methods for identifying objects in an environment of a host vehicle are disclosed. In one implementation, a system includes a processor configured to receive images representative of the environment of the host vehicle; assign first pixel descriptor values to a plurality of pixels associated with a first image and second pixel descriptor values to a plurality of pixels associated with a second image; identify object representations in the first image and the second image based on the first pixel descriptor values and the second pixel descriptor values, respectively; determine a first object descriptor and a second object descriptor based on the first pixel descriptor values and the second pixel descriptor values, respectively; and based on a comparison of the first object descriptor and the second object descriptor, output an indication that the object representations in the first image and the second image represent a common object.
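The descriptor-aggregation and comparison steps can be sketched as follows; mean pooling and cosine similarity are assumptions, since the abstract does not specify how pixel descriptors become an object descriptor or how descriptors are compared:

```python
import numpy as np

def object_descriptor(pixel_descriptors, mask):
    """Aggregate per-pixel descriptor values inside an object mask
    (mean pooling here; the aggregation in the claim is unspecified)."""
    return pixel_descriptors[mask].mean(axis=0)

def same_object(desc_a, desc_b, threshold=0.9):
    """Compare object descriptors across frames via cosine similarity
    (metric and threshold hypothetical)."""
    cos = desc_a @ desc_b / (np.linalg.norm(desc_a) * np.linalg.norm(desc_b))
    return cos >= threshold
```

Two frames would each yield an object descriptor this way, and a match above the threshold produces the "common object" indication.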
Electronic control device, vehicle control method, non-transitory tangible computer readable storage medium
An electronic control device for a vehicle equipped with an in-vehicle system is provided. The electronic control device may detect an abnormality that occurs in the in-vehicle system and may detect an operation related to a lane change. It may store a travel plan for performing autonomous driving of the vehicle, and may acquire at least one of: subject vehicle information on the vehicle, surrounding information on the surrounding environment of the vehicle, and driver information on a driver of the vehicle.
Vehicular collision avoidance system
A vehicular collision avoidance system includes a forward-viewing camera, a rearward-viewing camera, a rearward-sensing non-vision sensor and an electronic control unit. The vehicular collision avoidance system detects vehicles present forward and/or rearward of the equipped vehicle. Responsive to at least one selected from the group consisting of (i) data processing of image data captured by the rearward-viewing camera and (ii) data processing of sensor data captured by the rearward-sensing non-vision sensor, the vehicular collision avoidance system detects another vehicle approaching the equipped vehicle from the rear, determines that the other vehicle is traveling in the same traffic lane as the equipped vehicle, determines a speed difference between the vehicles, and determines a distance from the equipped vehicle to the other vehicle. Based on these determinations, the system determines that impact with the equipped vehicle by the other vehicle is imminent.
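The imminence determination from distance and speed difference amounts to a time-to-collision check; the threshold below is hypothetical and the abstract does not state the exact criterion:

```python
def rear_impact_imminent(gap_m, ego_speed_mps, rear_speed_mps, ttc_threshold_s=2.0):
    """Time-to-collision check for a vehicle closing from behind (threshold hypothetical)."""
    closing = rear_speed_mps - ego_speed_mps   # speed difference between the vehicles
    if closing <= 0:
        return False                           # not closing: no imminent impact
    return gap_m / closing < ttc_threshold_s   # distance over closing speed = TTC
```

For example, a 15 m gap closing at 10 m/s gives a 1.5 s TTC, which this sketch would flag as imminent.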
Perimeter sensor housings
The technology relates to an exterior sensor system for a vehicle configured to operate in an autonomous driving mode. The technology includes a close-in sensing (CIS) camera system to address blind spots around the vehicle. The CIS system is used to detect objects within a few meters of the vehicle. Based on object classification, the system is able to make real-time driving decisions. Classification is enhanced by employing cameras in conjunction with lidar sensors. The specific arrangement of multiple sensors in a single sensor housing is also important to object detection and classification. Thus, the positioning of the sensors and support components is selected to avoid occlusion and to otherwise prevent interference between the various sensor housing elements.
Object recognition apparatus, vehicle control apparatus, object recognition method, and vehicle control method
Provided are an object recognition apparatus and a vehicle control apparatus that raise the recognition accuracy for a surrounding object, together with an object recognition method and a vehicle control method. The object recognition apparatus receives object data, which is a state value of the object, from a first sensor that detects a surrounding object. It compares the object data with estimation data obtained by estimating a state value of the object from recognition data calculated in a past period, and determines whether or not the object data is in a low-resolution state. Then, in accordance with the determination result, it calculates the state value of the object using both the object data and the estimation data and outputs the state value as recognition data, so that the recognition accuracy for the object is raised.
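The determination-then-fusion step can be sketched as a one-dimensional blend; the deviation threshold and blend weight are hypothetical, and the abstract does not specify how the two data sources are combined:

```python
def fuse_state(measured, predicted, resolution_threshold=1.0, blend=0.5):
    """If the measurement deviates from the estimate by more than a resolution
    cell, treat the object data as low-resolution and blend it with the
    estimation data (threshold and blend weight hypothetical)."""
    if abs(measured - predicted) > resolution_threshold:
        return blend * measured + (1 - blend) * predicted  # low-resolution: use both
    return measured                                        # trust the measurement as-is
```

The fused value then becomes the recognition data against which the next measurement is compared.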
SYSTEM AND METHOD FOR LANE DEPARTURE WARNING WITH EGO MOTION AND VISION
An apparatus includes at least one camera configured to capture at least one image of a traffic lane, an inertial measurement unit (IMU) configured to detect motion characteristics, and at least one processor. The at least one processor is configured to obtain a vehicle motion trajectory using the IMU and based on one or more vehicle path prediction parameters, obtain a vehicle vision trajectory based on the at least one image, wherein the vehicle vision trajectory includes at least one lane boundary, determine distances between one or more points on the vehicle and one or more intersection points of the at least one lane boundary based on the obtained vehicle motion trajectory, determine at least one time to line crossing (TTLC) based on the determined distances and a speed of the vehicle, and activate a lane departure warning indicator based on the determined at least one TTLC.
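The TTLC computation from the determined distances and the vehicle speed reduces to a simple ratio; the warning threshold below is hypothetical:

```python
def time_to_line_crossing(distance_to_crossing_m, speed_mps):
    """TTLC: along-path distance to the lane-boundary intersection point
    divided by vehicle speed."""
    return float('inf') if speed_mps <= 0 else distance_to_crossing_m / speed_mps

def lane_departure_warning(ttlc_s, threshold_s=1.0):
    """Activate the warning indicator when the crossing is imminent
    (threshold hypothetical)."""
    return ttlc_s < threshold_s
```

With multiple intersection points along the motion trajectory, the smallest TTLC would drive the indicator.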