Patent classifications
B60W2556/25
USING PREDICTIVE VISUAL ANCHORS TO CONTROL AN AUTONOMOUS VEHICLE
Using predictive visual anchors to control an autonomous vehicle, including: determining, based on a plurality of frames of video data from a camera of an autonomous vehicle, one or more predicted visual anchors, wherein the one or more predicted visual anchors comprise a predicted location of one or more visual anchors at a future time relative to when the plurality of frames were captured; identifying, in another frame of video data corresponding to the future time, the one or more visual anchors; determining one or more differentials between the one or more visual anchors and the one or more predicted visual anchors; determining, based on the one or more differentials, one or more control operations for the autonomous vehicle; and applying the one or more control operations.
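The abstract above describes an extrapolate-then-compare loop: predict where an anchor will appear, observe where it actually appears, and steer on the difference. A minimal sketch in Python, assuming each anchor has already been localized in the past frames as (time, x, y) pixel observations, a constant-velocity model per anchor, and a simple proportional steering correction (all assumptions, not the patented method):

```python
import numpy as np

def predict_anchor(history, t_future):
    """Fit a constant-velocity model to (t, x, y) anchor observations
    and extrapolate the anchor's pixel location to t_future."""
    times = np.array([h[0] for h in history])
    positions = np.array([[h[1], h[2]] for h in history])
    coeffs = np.polyfit(times, positions, deg=1)  # one linear fit per coordinate
    return np.polyval(coeffs[:, 0], t_future), np.polyval(coeffs[:, 1], t_future)

def control_from_differential(observed, predicted, gain=0.002):
    """Map the pixel-space differential to a hypothetical steering adjustment."""
    dx = observed[0] - predicted[0]
    return -gain * dx  # steer to reduce horizontal drift of the anchor

history = [(0.0, 310.0, 220.0), (0.1, 312.0, 221.0), (0.2, 314.5, 221.5)]
pred = predict_anchor(history, t_future=0.3)
obs = (319.0, 222.0)  # anchor found in the frame captured at t = 0.3
print(control_from_differential(obs, pred))
```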
Safety and comfort constraints for navigation
A navigational system for a host vehicle may comprise at least one processing device. The processing device may be programmed to receive a first output and a second output associated with the host vehicle; identify a representation of a target object in the first output; and determine whether a characteristic of the target object triggers a navigational constraint. If the navigational constraint is not triggered, the processing device may verify the identification of the representation of the target object based on a combination of the first output and the second output. If the navigational constraint is triggered, the processing device may verify the identification of the representation of the target object based on the first output; and in response to the verification, cause at least one navigational change to the host vehicle.
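At its core, the branching verification described above is a confidence gate: corroboration from both outputs is required in the normal case, but a triggered safety constraint relaxes the requirement to the first output alone. A sketch assuming boolean detections per output (a simplification of the claimed outputs):

```python
def verify_target(first_output, second_output, triggers_constraint):
    """If a navigational constraint is triggered, accept the detection from the
    first output alone; otherwise require agreement of both outputs."""
    detected_first = first_output.get("target_detected", False)
    detected_second = second_output.get("target_detected", False)
    if triggers_constraint:
        return detected_first
    return detected_first and detected_second

if verify_target({"target_detected": True}, {"target_detected": False},
                 triggers_constraint=True):
    print("apply navigational change (e.g., brake or yield)")
```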
Methods and systems for sun-aware vehicle routing
Example implementations may relate to sun-aware vehicle routing. In particular, a computing system of a vehicle may determine an expected position of the sun relative to a geographic area. Based on the expected position, the computing system may make a determination that travel of the vehicle through certain location(s) within the geographic area is expected to result in the sun being proximate to an object within a field of view of the vehicle's image capture device. Responsively, the computing system may generate a route for the vehicle in the geographic area based at least on the route avoiding travel of the vehicle through these certain location(s), and may then operate the vehicle to travel in accordance with the generated route. Ultimately, this may help reduce or prevent situations where the quality of image(s) degrades due to sunlight, which may allow these image(s) to be used as a basis for operating the vehicle.
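One way to realize the described avoidance is to penalize road segments whose travel heading points into a low sun when building the route graph. A hedged sketch using networkx for the graph search; the sun azimuth/elevation values, thresholds, and toy road network are assumptions (in practice the sun position would come from an ephemeris computation):

```python
import networkx as nx

def sun_glare_penalty(segment_heading_deg, sun_azimuth_deg, sun_elevation_deg,
                      max_elevation=20.0, max_offset=25.0, penalty=1000.0):
    """Large extra cost if the camera would face a sun that is low and nearly
    aligned with the direction of travel along this segment."""
    offset = abs((segment_heading_deg - sun_azimuth_deg + 180) % 360 - 180)
    if sun_elevation_deg < max_elevation and offset < max_offset:
        return penalty
    return 0.0

G = nx.DiGraph()
# (from, to, length_m, heading_deg) for a toy road network
edges = [("A", "B", 100, 90), ("B", "D", 100, 90), ("A", "C", 120, 0), ("C", "D", 120, 45)]
for u, v, length, heading in edges:
    G.add_edge(u, v, weight=length + sun_glare_penalty(heading, sun_azimuth_deg=92,
                                                       sun_elevation_deg=8))
print(nx.shortest_path(G, "A", "D", weight="weight"))  # detours around the glare
```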
VEHICLE CONTROL SYSTEM, AND VEHICLE CONTROL METHOD
To provide a vehicle control system capable of planning a trajectory that ensures greater visibility and enables safe traveling when the sensor has an invisible (occluded) range.
A vehicle control system that plans a target trajectory of a vehicle based on recognition information from an external environment sensor, the vehicle control system including a recognizing unit that recognizes an object at the periphery of the vehicle based on the recognition information; and a trajectory planning unit that plans the target trajectory such that the actual detection range of the external environment sensor becomes wider when the recognizing unit recognizes the object.
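A rough illustration of planning "such that the detection range becomes wider": score candidate lateral offsets by the angular extent a recognized obstacle would occlude, and pick the least-occluded one. The circular-obstacle geometry and the candidate offsets are assumptions, not the claimed planner:

```python
import math

def occluded_angle(ego_pos, obstacle_pos, obstacle_radius):
    """Angular extent of the sensor's field of view blocked by a circular obstacle."""
    dist = math.hypot(obstacle_pos[0] - ego_pos[0], obstacle_pos[1] - ego_pos[1])
    if dist <= obstacle_radius:
        return math.pi  # degenerate case: obstacle on top of the ego position
    return 2.0 * math.asin(obstacle_radius / dist)

def pick_lateral_offset(obstacle_pos, obstacle_radius,
                        candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Choose the lateral offset that minimizes occlusion, i.e. widens the
    effective detection range of the forward sensor."""
    return min(candidates,
               key=lambda dy: occluded_angle((0.0, dy), obstacle_pos, obstacle_radius))

# Parked vehicle slightly to the right and ahead: shift left to see past it.
print(pick_lateral_offset(obstacle_pos=(15.0, -1.5), obstacle_radius=1.0))
```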
Sensor calibration using dense depth maps
This disclosure is directed to calibrating sensors mounted on an autonomous vehicle. A dense depth map can be generated in a two-dimensional camera space using point cloud data generated by one of the sensors. Image data from another of the sensors can be compared to the dense depth map in the two-dimensional camera space. Differences determined by the comparison can indicate alignment errors between the sensors. Calibration data associated with the errors can be determined and used to calibrate the sensors without the need for calibration infrastructure.
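A simplified sketch of the comparison step: project lidar points into the camera with candidate extrinsics and score how well depth discontinuities line up with image intensity edges. The pinhole projection, nearest-point splatting, and gradient-product score are assumptions; K, R, and t are hypothetical calibration parameters:

```python
import numpy as np

def project_points(points, K, R, t, shape):
    """Project Nx3 lidar points into an HxW depth image (nearest point wins)."""
    cam = (R @ points.T + t.reshape(3, 1)).T
    cam = cam[cam[:, 2] > 0.1]                     # keep points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    depth = np.full(shape, np.inf)
    for (u, v), z in zip(uv.astype(int), cam[:, 2]):
        if 0 <= v < shape[0] and 0 <= u < shape[1]:
            depth[v, u] = min(depth[v, u], z)
    return depth

def alignment_score(depth, image):
    """Higher when depth edges coincide with image intensity edges."""
    d_edges = np.hypot(*np.gradient(np.where(np.isinf(depth), 0.0, depth)))
    i_edges = np.hypot(*np.gradient(image.astype(float)))
    return float((d_edges * i_edges).sum())
```

In practice one would search over small perturbations of R and t and keep the candidate that maximizes the score, which plays the role of the calibration data mentioned above.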
Safety and comfort constraints for navigation
A navigational system for a host vehicle may comprise at least one processing device. The processing device may be programmed to receive a first output and a second output associated with the host vehicle and identify a representation of a target object in the first output. The processing device may determine whether a characteristic of the target object triggers a navigational constraint by verifying the identification of the target object based on the first output and, if the at least one navigational constraint is not verified based on the first output, then verifying the identification of the target object based on a combination of the first output and the second output. In response to the verification, the processing device may cause at least one navigational change to the host vehicle.
Collision monitoring using statistic models
Techniques and methods for performing collision monitoring using error models. For instance, a vehicle may generate sensor data using one or more sensors. The vehicle may then analyze the sensor data using systems in order to determine parameters associated with the vehicle and parameters associated with another object. Additionally, the vehicle may process the parameters associated with the vehicle using error models associated with the systems in order to determine a distribution of estimated locations associated with the vehicle. The vehicle may also process the parameters associated with the object using the error models in order to determine a distribution of estimated locations associated with the object. Using the distributions of estimated locations, the vehicle may determine the probability of collision between the vehicle and the object.
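The distribution-based probability can be illustrated with a Monte Carlo sketch, assuming Gaussian error models over 2-D position and a fixed collision radius (both simplifications of the claimed error models):

```python
import numpy as np

def collision_probability(ego_mean, ego_cov, obj_mean, obj_cov,
                          collision_radius=2.0, n_samples=100_000, seed=0):
    """Sample estimated locations for the vehicle and the object from their
    error models and return the fraction of sample pairs that collide."""
    rng = np.random.default_rng(seed)
    ego = rng.multivariate_normal(ego_mean, ego_cov, n_samples)
    obj = rng.multivariate_normal(obj_mean, obj_cov, n_samples)
    distances = np.linalg.norm(ego - obj, axis=1)
    return float((distances < collision_radius).mean())

print(collision_probability(ego_mean=[0.0, 0.0], ego_cov=[[0.5, 0.0], [0.0, 0.5]],
                            obj_mean=[3.0, 0.5], obj_cov=[[0.8, 0.0], [0.0, 0.8]]))
```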
Calculating velocity of an autonomous vehicle using radar technology
Examples relating to vehicle velocity calculation using radar technology are described. An example method performed by a computing system may involve, while a vehicle is moving on a road, receiving, from two or more radar sensors mounted at different locations on the vehicle, radar data representative of an environment of the vehicle. The method may involve, based on the data, detecting at least one scatterer in the environment. The method may involve making a determination of a likelihood that the at least one scatterer is stationary with respect to the vehicle. The method may involve, based on the likelihood being at least equal to a predefined confidence threshold, calculating a velocity of the vehicle based on the data from the sensors. The calculated velocity may include an angular velocity and a linear velocity. Further, the method may involve controlling the vehicle based on the calculated velocity.
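Under a 2-D rigid-body assumption, the radial (Doppler) velocities of stationary scatterers seen by radars at different mounting positions are linear in the vehicle's linear velocity and yaw rate, so both can be recovered with a least-squares solve. A sketch of that idea (not the claimed method); the synthetic sensors, bearings, and velocities below are assumptions:

```python
import numpy as np

def solve_ego_velocity(measurements):
    """measurements: list of (sensor_offset_xy, scatterer_unit_dir_xy, radial_velocity).
    Returns [vx, vy, yaw_rate] estimated by least squares."""
    A, b = [], []
    for (rx, ry), (ux, uy), doppler in measurements:
        # radial velocity = -u . (v + w x r) for a scatterer fixed in the world
        A.append([-ux, -uy, ux * ry - uy * rx])
        b.append(doppler)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Synthetic check: two radars, three scatterer bearings, known ground truth.
true_v, true_w = np.array([10.0, 0.0]), 0.1
sensors = [np.array([2.0, 0.8]), np.array([2.0, -0.8])]
dirs = [np.array([1.0, 0.2]), np.array([1.0, -0.3]), np.array([0.8, 0.6])]
meas = []
for r in sensors:
    v_sensor = true_v + true_w * np.array([-r[1], r[0]])  # v + w x r in 2-D
    for u in dirs:
        u = u / np.linalg.norm(u)
        meas.append(((r[0], r[1]), (u[0], u[1]), float(-u @ v_sensor)))
print(solve_ego_velocity(meas))  # recovers approximately [10.0, 0.0, 0.1]
```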
MOBILE OBJECT CONTROL DEVICE, MOBILE OBJECT CONTROL METHOD, AND STORAGE MEDIUM
According to an embodiment, a mobile object control device includes a recognizer configured to recognize a surrounding situation of a mobile object on the basis of an output of an external sensor, and a marking recognizer configured to recognize markings that divide an area through which the mobile object passes, on the basis of the surrounding situation recognized by the recognizer. When it is determined that marking recognition accuracy has decreased, the marking recognizer extracts a prescribed area from the surrounding situation, extracts edges within the extracted prescribed area, and recognizes the markings on the basis of the extraction result.
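A hedged sketch of that fallback path: when marking confidence drops, crop a prescribed region of interest, extract edges there with OpenCV, and recover marking candidates as fitted line segments. The thresholds, the ROI, and the synthetic test image are assumptions, not the claimed device:

```python
import numpy as np
import cv2

def recover_marking(gray, roi, confidence, min_confidence=0.5):
    """gray: full camera image (uint8). roi: (x, y, w, h) prescribed area."""
    if confidence >= min_confidence:
        return None  # normal marking recognizer is trusted; no fallback needed
    x, y, w, h = roi
    patch = gray[y:y + h, x:x + w]
    edges = cv2.Canny(patch, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    # shift detected segments back into full-image coordinates
    return None if lines is None else lines[:, 0] + [x, y, x, y]

# Synthetic image with a faint diagonal stripe standing in for a worn marking.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (40, 180), (120, 20), color=180, thickness=3)
print(recover_marking(img, roi=(20, 0, 160, 200), confidence=0.3))
```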
Automatic robotically steered camera for targeted high performance perception and vehicle control
Disclosed are methods, systems, and non-transitory computer readable media that control an autonomous vehicle via at least two sensors. One aspect includes capturing an image of a scene ahead of the vehicle with a first sensor; identifying an object in the scene at a confidence level based on the image; determining that the confidence level of the identification is below a threshold; in response to the confidence level being below the threshold, directing a second sensor having a field of view smaller than that of the first sensor to generate a second image that includes a location of the identified object; further identifying the object in the scene based on the second image; and controlling the vehicle based on the further identification of the object.
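The claimed control flow is essentially a confidence-gated hand-off from a wide-FOV sensor to a steerable narrow-FOV one. A minimal sketch with stand-in stubs for the detectors and the camera gimbal (all hypothetical, not the disclosed perception stack):

```python
CONFIDENCE_THRESHOLD = 0.7

def detect_wide(frame):
    # Stub: pretend the wide camera sees a pedestrian, but not confidently.
    return "pedestrian", 0.55, (412, 230)

def point_narrow_camera_at(pixel_xy):
    # Stub: steer the narrow-FOV camera toward pixel_xy and grab a frame.
    return {"centered_on": pixel_xy}

def detect_narrow(frame):
    # Stub: the zoomed-in view resolves the object with higher confidence.
    return "pedestrian", 0.93, frame["centered_on"]

def perceive(frame_wide):
    label, confidence, pixel_xy = detect_wide(frame_wide)           # coarse pass
    if confidence < CONFIDENCE_THRESHOLD:
        narrow_frame = point_narrow_camera_at(pixel_xy)             # re-aim second sensor
        label, confidence, pixel_xy = detect_narrow(narrow_frame)   # refined pass
    return label, confidence, pixel_xy

print(perceive(frame_wide=None))
```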