Patent classifications
B60W2556/35
SENSOR AIMING DEVICE, DRIVING CONTROL SYSTEM, AND CORRECTION AMOUNT ESTIMATION METHOD
A sensor aiming device includes: a target positional relationship processing unit configured to output positional relationship information of first and second targets; a sensor observation information processing unit configured to convert observation results of the first and second targets, obtained by first and second sensors, into a predetermined unified coordinate system according to a coordinate conversion parameter, perform time synchronization at a predetermined timing, and extract first target information indicating a position of the first target and second target information indicating a position of the second target; a position estimation unit configured to estimate a position of the second target using the first target information, the second target information, and the positional relationship information; and a sensor correction amount estimation unit configured to calculate a deviation amount of the second sensor using the second target information and the estimated position of the second target, and to estimate a correction amount from that deviation.
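The correction flow in this abstract (estimate the second target's position from the first sensor's observation plus the known target geometry, then compare against the second sensor's own observation) can be illustrated with a minimal 2D sketch. Everything below is hypothetical: it assumes both observations are already time-synchronized in the unified coordinate system, and it reduces the correction amount to a single yaw offset, whereas the patent leaves the parameterization open.

```python
import numpy as np

def estimate_correction(t1_from_sensor1, t2_from_sensor2, t2_offset_from_t1):
    """All inputs are 2D points/vectors in the unified coordinate system.

    t1_from_sensor1:    first target position observed by the first sensor
    t2_from_sensor2:    second target position observed by the second sensor
    t2_offset_from_t1:  known positional relationship of the two targets
    """
    # Position estimation unit: predict target 2 from target 1 + geometry.
    t2_estimated = t1_from_sensor1 + t2_offset_from_t1

    # Sensor correction amount estimation unit: deviation of sensor 2.
    deviation = t2_estimated - t2_from_sensor2

    # One possible correction amount: the yaw angle that rotates the
    # observed bearing of target 2 onto the estimated bearing.
    yaw_correction = (np.arctan2(t2_estimated[1], t2_estimated[0])
                      - np.arctan2(t2_from_sensor2[1], t2_from_sensor2[0]))
    return deviation, yaw_correction

# Example: sensor 2 reports target 2 slightly rotated off the estimate.
dev, yaw = estimate_correction(np.array([10.0, 0.0]),
                               np.array([9.9, 2.2]),
                               np.array([0.0, 2.0]))
print(dev, np.degrees(yaw))
```

In practice such a single-shot estimate would be filtered over many target observations before being applied as an aiming correction.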
SYSTEMS AND METHODS FOR AUTONOMOUS FIRST RESPONSE ROUTING
A device may receive emergency data, traffic data, network performance data, crime data, and gunshot data associated with a geographical area and may identify a location within the geographical area based on the emergency data, the traffic data, the network performance data, the crime data, and the gunshot data. The device may determine, based on the emergency data, the traffic data, the network performance data, the crime data, and the gunshot data for the location, a risk level for the location and may identify an autonomous vehicle based on the risk level, the traffic data, and the network performance data for the location. The device may determine a route for the autonomous vehicle to the location based on the traffic data and the network performance data for the location, and may perform actions based on the autonomous vehicle and the route.
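A toy sketch of how the five data feeds might be reduced to a risk level and used to pick a vehicle. The weights, thresholds, and field names are invented for illustration; the abstract does not specify a scoring model.

```python
# Hypothetical weights over the five normalized signals (each in [0, 1]).
WEIGHTS = {"emergency": 0.3, "traffic": 0.15, "network": 0.15,
           "crime": 0.2, "gunshot": 0.2}

def risk_level(signals: dict) -> str:
    """signals: {"emergency": 0.9, "traffic": 0.4, ...} for one location."""
    score = sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)
    return "high" if score > 0.66 else "medium" if score > 0.33 else "low"

def pick_vehicle(vehicles, level):
    """vehicles: list of dicts with 'id', 'eta_min', 'network_quality'.
    At high risk, favor network quality; otherwise the shortest ETA."""
    key = (lambda v: -v["network_quality"]) if level == "high" \
          else (lambda v: v["eta_min"])
    return min(vehicles, key=key)

level = risk_level({"emergency": 0.9, "traffic": 0.5, "network": 0.3,
                    "crime": 0.6, "gunshot": 0.8})
print(level, pick_vehicle(
    [{"id": "AV1", "eta_min": 4, "network_quality": 0.7},
     {"id": "AV2", "eta_min": 7, "network_quality": 0.95}], level))
```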
Sensor fusion for precipitation detection and control of vehicles
An apparatus includes a processor configured to be disposed with a vehicle and a memory coupled to the processor. The memory stores instructions to cause the processor to receive at least two of: radar data, camera data, lidar data, or sonar data. The sensor data is associated with a predefined region in the vicinity of the vehicle while the vehicle is traveling during a first time period, and at least a portion of the vehicle is positioned within the predefined region during that period. The instructions also cause the processor to detect that no other vehicle is present within the predefined region. The environment of the vehicle during the first time period is classified, based on at least two of the types of sensor data, as one state from a set of states that includes at least one of dry, light rain, heavy rain, light snow, or heavy snow, to produce an environment classification. An operational parameter of the vehicle is modified based on the environment classification.
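One way to read the classification step is a per-modality estimate followed by a fusion vote. The sketch below assumes each modality has already produced its own state estimate; the patent does not disclose the actual classifier, so the majority vote is purely illustrative, as is the table of operational parameters.

```python
from collections import Counter

# Hypothetical speed-limit scale factor per environment state.
SPEED_FACTOR = {"dry": 1.0, "light rain": 0.9, "heavy rain": 0.75,
                "light snow": 0.8, "heavy snow": 0.6}

def classify_environment(per_sensor_state: dict) -> str:
    """per_sensor_state: e.g. {"radar": "light rain", "lidar": "dry"}.
    The abstract requires at least two modalities observing a region
    that contains part of the ego vehicle and no other vehicle."""
    if len(per_sensor_state) < 2:
        raise ValueError("need at least two sensor modalities")
    return Counter(per_sensor_state.values()).most_common(1)[0][0]

state = classify_environment({"radar": "light rain",
                              "camera": "light rain",
                              "lidar": "dry"})
print(state, "-> speed factor", SPEED_FACTOR[state])
```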
Generating a Fused Object Bounding Box Based on Uncertainty
This document describes techniques and systems for generating a fused object bounding box based on uncertainty. At least two bounding boxes, each associated with a different sensor, are generated. A fused center point and yaw angle, as well as length, width, and velocity, can be found by mixing the distributions of the parameters from each bounding box. A discrepancy between the center points of each bounding box can be used to determine whether to refine the fused bounding box (e.g., find an intersection between at least two bounding boxes) or consolidate the fused bounding box (e.g., find a union between at least two bounding boxes). The result is a fused bounding box with a confidence level that reflects its associated uncertainty. In this manner, better estimates of the uncertainty of the fused bounding box may be achieved, improving the tracking performance of a sensor fusion system.
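The center-point fusion and the refine/consolidate decision can be sketched with precision-weighted Gaussian mixing for the fused center and a normalized discrepancy to choose between intersecting and unioning the boxes. The variance values, the threshold of 1.0, and the function names are assumptions; the document does not fix the mixing method.

```python
import numpy as np

def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    """Precision-weighted fusion of two independent Gaussian estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * mu_a + w_b * mu_b) / (w_a + w_b), 1.0 / (w_a + w_b)

def center_discrepancy(c_a, c_b, var_a, var_b):
    """Distance between center estimates, normalized by combined variance."""
    d = np.asarray(c_a) - np.asarray(c_b)
    return float(np.sqrt(d @ d / (var_a + var_b)))

# Two bounding-box centers from different sensors, with scalar variances.
c_radar, c_camera = np.array([5.0, 1.0]), np.array([5.3, 1.2])
v_radar, v_camera = 0.4, 0.1

center, var = fuse_gaussian(c_radar, v_radar, c_camera, v_camera)
mode = ("refine (intersection)"
        if center_discrepancy(c_radar, c_camera, v_radar, v_camera) < 1.0
        else "consolidate (union)")
print(center, var, mode)
```

The fused variance from `fuse_gaussian` is what would back the confidence level attached to the fused box.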
ZONE CONTROL UNIT FOR A VEHICLE
A vehicle includes a plurality of zone control units that each comprise an inertial measurement unit, and wherein each zone control unit is configured to provide inertial measurement data obtained from its respective inertial measurement unit to other vehicle components via a vehicle bus.
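A minimal sketch of the data each zone control unit might place on the bus. The message fields and the publish interface are invented for illustration; the abstract only requires that each unit expose its IMU data to other vehicle components.

```python
from dataclasses import dataclass
import time

@dataclass
class ImuMessage:
    zone_id: int       # which zone control unit produced the sample
    timestamp: float   # seconds since the epoch
    accel: tuple       # (ax, ay, az) in m/s^2
    gyro: tuple        # (wx, wy, wz) in rad/s

class PrintBus:
    """Stand-in for the vehicle bus; a real system would use CAN/Ethernet."""
    def publish(self, topic, message):
        print(topic, message)

class ZoneControlUnit:
    def __init__(self, zone_id, bus):
        self.zone_id = zone_id
        self.bus = bus

    def publish_imu(self, accel, gyro):
        """Expose this zone's inertial measurement to other components."""
        self.bus.publish("imu", ImuMessage(self.zone_id, time.time(),
                                           accel, gyro))

ZoneControlUnit(zone_id=2, bus=PrintBus()).publish_imu((0.1, 0.0, 9.8),
                                                       (0.0, 0.0, 0.01))
```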
SENSOR INFORMATION FUSION METHOD AND DEVICE, AND RECORDING MEDIUM RECORDING PROGRAM FOR EXECUTING THE METHOD
A sensor information fusion method of an embodiment includes: obtaining N sensor tracks from each of a plurality of sensors with respect to a target located around a vehicle; calculating association costs of the N sensor tracks with respect to M reference tracks and storing the association costs in matrix form; calculating an arrangement of reference tracks and sensor tracks that minimizes the association costs with respect to the matrix; and outputting a sensing information result with respect to the target according to the arrangement of the reference tracks and the sensor tracks calculated for the plurality of sensors.
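The cost-matrix-plus-optimal-arrangement step maps directly onto a linear assignment problem. A minimal sketch, using squared distance as the (unspecified) association cost and SciPy's Hungarian-style solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(reference_tracks, sensor_tracks):
    """reference_tracks: (M, 2) positions; sensor_tracks: (N, 2) positions.
    The squared-distance cost is an assumption; the abstract only requires
    costs stored in matrix form and a cost-minimizing arrangement."""
    cost = np.array([[float(np.sum((r - s) ** 2)) for s in sensor_tracks]
                     for r in reference_tracks])      # M x N cost matrix
    ref_idx, sen_idx = linear_sum_assignment(cost)    # optimal arrangement
    pairs = list(zip(ref_idx.tolist(), sen_idx.tolist()))
    return pairs, cost[ref_idx, sen_idx].sum()

pairs, total = associate(np.array([[0.0, 0.0], [10.0, 0.0]]),
                         np.array([[9.8, 0.3], [0.2, -0.1]]))
print(pairs, total)  # [(0, 1), (1, 0)] with a small total cost
```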
SENSOR RECOGNITION INTEGRATION DEVICE
Provided is a sensor recognition integration device that reduces the load of integration processing while satisfying the minimum accuracy required for vehicle travel control, improving the processing performance of an ECU and suppressing an increase in cost. A sensor recognition integration device B006 that integrates a plurality of pieces of object information, related to an object around an own vehicle and detected by a plurality of external recognition sensors, includes: a prediction update unit 100 that generates predicted object information by predicting the behavior of the object; an association unit 101 that calculates a relationship between the predicted object information and the plurality of pieces of object information; an integration processing mode determination unit 102 that switches an integration processing mode, which determines how the plurality of pieces of object information are integrated, on the basis of the positional relationship between the predicted object information and a specific region (for example, a boundary portion) in the overlapping region of the detection regions of the plurality of external recognition sensors; and an integration target information generation unit 104 that integrates the plurality of pieces of object information associated with the predicted object information on the basis of the integration processing mode.
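The mode switch can be sketched as a geometric test: if the predicted object sits near the boundary portion of the sensors' overlapping detection region, spend the accuracy budget there; elsewhere, integrate cheaply. The rectangle model, the margin, and the mode names below are simplifications of the abstract's unspecified regions.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned stand-in for the overlapping detection region."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, p):
        return self.xmin <= p[0] <= self.xmax and self.ymin <= p[1] <= self.ymax

    def boundary_distance(self, p):
        return min(p[0] - self.xmin, self.xmax - p[0],
                   p[1] - self.ymin, self.ymax - p[1])

def integration_mode(predicted_pos, overlap: Rect, margin=1.0):
    if not overlap.contains(predicted_pos):
        return "single_sensor"          # outside the overlap: nothing to fuse
    if overlap.boundary_distance(predicted_pos) < margin:
        return "high_accuracy"          # boundary portion: full integration
    return "lightweight"                # interior: low-cost integration

overlap = Rect(0.0, -5.0, 40.0, 5.0)
print(integration_mode((39.5, 0.0), overlap))  # high_accuracy
print(integration_mode((20.0, 0.0), overlap))  # lightweight
```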
METHOD AND DEVICE FOR PREDICTING A FUTURE ACTION OF AN OBJECT FOR A DRIVING ASSISTANCE SYSTEM FOR VEHICLE DRIVABLE IN HIGHLY AUTOMATED FASHION
A method is provided for predicting a future action of an object for a driving assistance system of a highly automated vehicle. At least one sensor signal from at least one vehicle sensor of the vehicle is read in, the sensor signal representing at least one piece of kinematic object information concerning the object as detected by the vehicle sensor at the current point in time. A planner signal from a planner of the driving assistance system is read in, the planner signal representing at least one piece of semantic information concerning the object, or the surroundings of the object, at a point in time in the past. The kinematic object information is fused with the semantic information to obtain a fusion signal, and a prediction signal representing the future action of the object is determined using the fusion signal.
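A toy rule-based fusion of current kinematics with an older semantic label, to make the signal flow concrete. The thresholds, field names, and the rule itself are invented; the abstract does not constrain how the fusion or the prediction is computed.

```python
def predict_action(kinematic: dict, semantic_label: str) -> str:
    """kinematic: current-sample object state (the sensor signal).
    semantic_label: planner output from an earlier time step."""
    moving = kinematic["speed_mps"] > 0.5
    heading_toward_road = abs(kinematic["heading_to_road_rad"]) < 0.5
    if (semantic_label == "pedestrian_at_crosswalk"
            and moving and heading_toward_road):
        return "will_cross"
    return "will_keep_course"

print(predict_action({"speed_mps": 1.2, "heading_to_road_rad": 0.2},
                     "pedestrian_at_crosswalk"))  # will_cross
```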
VEHICULAR DRIVING ASSISTANCE SYSTEM WITH ENHANCED TRAFFIC LANE DETERMINATION
A vehicular driver assistance system includes a front camera module (FCM) disposed at a vehicle. The system, responsive to processing captured image data, generates FCM lane information including information regarding a traffic lane the vehicle is currently traveling along. An e-Horizon module (EHM) generates EHM lane information including information regarding the traffic lane the vehicle is currently traveling along. The vehicular driver assistance system determines an FCM correlation using the FCM lane information and sensor data captured by at least one exterior sensor. The vehicular driver assistance system determines an EHM correlation using the EHM lane information and the sensor data captured by the at least one exterior sensor. Responsive to determining the FCM correlation and the EHM correlation, the system controls lateral movement of the vehicle based on one selected from the group consisting of (i) the FCM lane information and (ii) the EHM lane information.
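One plausible reading of "correlation" here is a numeric agreement score between each module's lane geometry and independent exterior-sensor evidence, with control handed to whichever source agrees better. The sampling scheme and the use of Pearson correlation are assumptions:

```python
import numpy as np

def select_lane_source(fcm_offsets, ehm_offsets, sensor_offsets):
    """Each array holds lateral lane-center offsets sampled at the same
    longitudinal distances ahead of the vehicle."""
    fcm_corr = np.corrcoef(fcm_offsets, sensor_offsets)[0, 1]
    ehm_corr = np.corrcoef(ehm_offsets, sensor_offsets)[0, 1]
    return ("FCM", fcm_offsets) if fcm_corr >= ehm_corr else ("EHM", ehm_offsets)

sensor = np.array([0.00, 0.05, 0.12, 0.22])    # exterior-sensor evidence
fcm    = np.array([0.01, 0.06, 0.11, 0.20])    # camera (FCM) lane model
ehm    = np.array([0.00, 0.10, 0.05, 0.02])    # map-based (EHM) lane model
print(select_lane_source(fcm, ehm, sensor)[0])  # FCM agrees better here
```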
METHOD FOR DETERMINING ATTRIBUTE VALUE OF OBSTACLE IN VEHICLE INFRASTRUCTURE COOPERATION, DEVICE AND AUTONOMOUS DRIVING VEHICLE
The present disclosure provides a method and apparatus for determining an attribute value of an obstacle in vehicle-infrastructure cooperation. The method includes: acquiring vehicle-end data collected by at least one sensor of an autonomous driving vehicle; acquiring vehicle-to-everything (V2X) data transmitted by a roadside device; and, in response to determining that an obstacle is at an edge of a blind spot of the autonomous driving vehicle, fusing the vehicle-end data and the V2X data to obtain an estimated attribute value of the obstacle.
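The blind-spot-edge fusion can be sketched as confidence-weighted averaging of one attribute (say, obstacle speed) from the two sources. The gating condition follows the abstract; the weighting scheme and field layout are assumptions.

```python
def fuse_attribute(vehicle_est, v2x_est, at_blind_spot_edge: bool) -> float:
    """vehicle_est / v2x_est: (value, confidence in [0, 1]) for one obstacle
    attribute, e.g. speed in m/s. Away from the blind-spot edge, the
    on-board estimate is used as-is."""
    if not at_blind_spot_edge:
        return vehicle_est[0]
    (v_val, v_conf), (x_val, x_conf) = vehicle_est, v2x_est
    return (v_conf * v_val + x_conf * x_val) / (v_conf + x_conf)

# Obstacle half-hidden at the blind-spot edge: the roadside unit sees it
# better, so its higher-confidence estimate dominates the fused value.
print(fuse_attribute((4.0, 0.2), (6.0, 0.9), at_blind_spot_edge=True))  # ~5.64
```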