B60W2420/408

Method of switching vehicle drive mode from automatic drive mode to manual drive mode depending on accuracy of detecting object

An apparatus includes a memory, and circuitry which, in operation, performs operations including: storing, in the memory, an object occurrence map defining an occurrence area where there is a possibility that an object appears; detecting the object included in a captured image of a scene seen in a running direction of a vehicle; switching a vehicle drive mode, based on a result of the detection of the object and the object occurrence map, from an automatic drive mode in which the vehicle is automatically driven to a manual drive mode in which the vehicle is driven manually by a driver; and controlling driving of the vehicle in the switched manual drive mode.
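The switching decision described above can be sketched as a simple rule: hand control back to the driver when the occurrence map says an object is likely in the area but the detector's confidence is too low to trust. The function name, the thresholds, and the probability/confidence inputs below are all illustrative assumptions, not the patent's actual implementation.

```python
def choose_drive_mode(occurrence_prob, detection_confidence,
                      occ_threshold=0.5, conf_threshold=0.7):
    """Switch to manual mode when an object is likely to appear ahead
    but the detection result is not reliable enough to keep driving
    automatically."""
    if occurrence_prob >= occ_threshold and detection_confidence < conf_threshold:
        return "manual"
    return "automatic"

# High occurrence probability + low detection confidence -> hand over to driver.
mode_risky = choose_drive_mode(occurrence_prob=0.8, detection_confidence=0.4)
# Confident detection -> the automatic mode can keep driving.
mode_safe = choose_drive_mode(occurrence_prob=0.8, detection_confidence=0.9)
```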

LIDAR system for autonomous vehicle

A method is presented for optimizing a scan pattern of a LIDAR system on an autonomous vehicle. The method includes receiving first SNR values based on values of a range of the target, where the first SNR values are for a respective scan rate. The method further includes receiving second SNR values based on values of the range of the target, where the second SNR values are for a respective integration time. The method further includes receiving a maximum design range of the target at each angle in an angle range. The method further includes determining, for each angle in the angle range, a maximum scan rate and a minimum integration time. The method further includes defining a scan pattern of the LIDAR system based on the maximum scan rate and the minimum integration time at each angle and operating the LIDAR system according to the scan pattern.
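The per-angle optimization can be sketched as a feasibility lookup: given SNR-vs-range curves for each candidate scan rate and each candidate integration time, pick the fastest scan rate and shortest integration time that still meet a required SNR at that angle's design range. The toy linear SNR models and all numbers below are assumptions for illustration, not real LIDAR characteristics.

```python
def plan_scan_pattern(design_range_by_angle, snr_by_scan_rate,
                      snr_by_integration_time, snr_required):
    """For each angle, choose the maximum feasible scan rate and the
    minimum feasible integration time at that angle's design range."""
    pattern = {}
    for angle, design_range in design_range_by_angle.items():
        rates = [rate for rate, snr in snr_by_scan_rate.items()
                 if snr(design_range) >= snr_required]
        times = [t for t, snr in snr_by_integration_time.items()
                 if snr(design_range) >= snr_required]
        pattern[angle] = (max(rates), min(times))
    return pattern

# Toy SNR models: SNR falls off with range; faster scanning and shorter
# integration both cost SNR.
snr_by_rate = {100.0: lambda r: 30.0 - 0.1 * r,    # slow scan, higher SNR
               200.0: lambda r: 25.0 - 0.1 * r}    # fast scan, lower SNR
snr_by_time = {1e-6: lambda r: 20.0 - 0.1 * r,     # short dwell
               2e-6: lambda r: 26.0 - 0.1 * r}     # long dwell

pattern = plan_scan_pattern({0: 50.0, 10: 150.0},
                            snr_by_rate, snr_by_time, snr_required=10.5)
# Short-range angle: the fast scan and short integration suffice.
# Long-range angle: the planner falls back to a slower scan and longer dwell.
```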

Autonomous Coach Vehicle Learned From Human Coach
20200387156 · 2020-12-10 ·

Systems and methods of the present disclosure include a motion planning module that iteratively determines possible trajectories for a vehicle to follow, calculates an estimated cost associated with each possible trajectory based on cost functions and cost weights, each cost function corresponding to a trajectory evaluation feature, and selects an optimal trajectory having a least associated estimated cost. When the vehicle is operated in a learning mode, a learning module determines a first actual trajectory traveled by the vehicle, compares the first actual trajectory with the optimal trajectory for that time period, and updates the cost weights based on the comparison. When the vehicle is operated in a teaching mode, a teaching module determines a second actual trajectory traveled by the vehicle, compares the second actual trajectory with the optimal trajectory selected for that time period, and generates output to a user of the vehicle based on the comparison.
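The cost-weighted selection and the learning-mode weight update can be sketched as follows. The feature names, the perceptron-style update rule, and the learning rate are assumptions for illustration; the abstract does not specify how the comparison translates into new weights.

```python
def trajectory_cost(features, weights):
    # Estimated cost: weighted sum of per-feature cost-function values.
    return sum(weights[name] * value for name, value in features.items())

def select_optimal(candidates, weights):
    # Pick the candidate trajectory with the least estimated cost.
    return min(candidates, key=lambda feats: trajectory_cost(feats, weights))

def update_weights(weights, optimal_feats, actual_feats, lr=0.1):
    # Illustrative update: shift each weight by the gap between the planner's
    # chosen trajectory and the human-driven one on that feature.
    return {name: w + lr * (optimal_feats[name] - actual_feats[name])
            for name, w in weights.items()}

weights = {"comfort": 1.0, "progress": 1.0}
candidates = [{"comfort": 0.2, "progress": 0.9},
              {"comfort": 0.5, "progress": 0.3}]

optimal = select_optimal(candidates, weights)          # second candidate (cost 0.8)
actual = {"comfort": 0.1, "progress": 0.4}             # trajectory the human drove
weights = update_weights(weights, optimal, actual)
```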

TRAJECTORY GENERATION USING TEMPORAL LOGIC AND TREE SEARCH

Techniques for determining a trajectory for an autonomous vehicle are described herein. In general, determining a route can include utilizing a search algorithm such as Monte Carlo Tree Search (MCTS) to search for possible trajectories, while using temporal logic formulas, such as Linear Temporal Logic (LTL), to validate or reject the possible trajectories. Trajectories can be selected based on various costs and constraints optimized for performance. Determining a trajectory can include determining a current state of the autonomous vehicle, which can include determining static and dynamic symbols in an environment. A context of an environment can be populated with the symbols, features, predicates, and LTL formulas. Rabin automata can be constructed from the LTL formulas, and the automata can be used to evaluate various candidate trajectories. Nodes of the MCTS can be generated and actions can be explored based on machine learning implemented as, for example, a deep neural network.
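The validation step can be illustrated on finite, discretized trajectories. A real system would compile the LTL formulas into Rabin automata; the sketch below instead evaluates the two most common temporal operators ("always" and "eventually") directly, and the lane-width and goal predicates are invented for the example.

```python
def always(pred, trajectory):
    # LTL "G p": the predicate must hold in every state of the trajectory.
    return all(pred(state) for state in trajectory)

def eventually(pred, trajectory):
    # LTL "F p": the predicate must hold in at least one state.
    return any(pred(state) for state in trajectory)

def valid(trajectory):
    in_lane = lambda s: 0.0 <= s["lateral"] <= 3.5   # "always stay in lane"
    at_goal = lambda s: s["x"] >= 100.0              # "eventually reach goal"
    return always(in_lane, trajectory) and eventually(at_goal, trajectory)

good = [{"lateral": 1.0, "x": 0.0},
        {"lateral": 2.0, "x": 60.0},
        {"lateral": 1.5, "x": 120.0}]
bad = [{"lateral": 1.0, "x": 0.0},
       {"lateral": 4.0, "x": 120.0}]   # drifts out of the lane
```

A tree search would call `valid` (or step the automata) on each candidate before scoring it, pruning rejected branches.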

Ultra Short Range Radar Sensor Systems And Methods
20200386858 · 2020-12-10 ·

A radar sensor includes: a transmitter configured to transmit radar signals via a transmit antenna; a receiver configured to receive signals reflected back to the radar sensor via a receive antenna; a profile module configured to generate an energy profile including a plurality of points for a plurality of distances from the radar sensor, respectively, each of the points including an energy of the signals reflected back to the radar sensor for that one of the plurality of distances; a minimums module configured to identify ones of the plurality of points having local minimums of energy; and a curve module configured to, based on the plurality of points having local minimums of energy, generate an equation representative of a curve fit to the plurality of points having local minimums of energy, the equation relating distance from the radar sensor to baseline energy of the signals reflected back to the radar sensor.
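The minimums and curve modules can be sketched in a few lines: pick interior points of the energy profile that are lower than both neighbors, then fit an equation through them. The abstract leaves the curve family unspecified; a straight line fitted by ordinary least squares is used here purely for illustration, and the sample profile is synthetic.

```python
def local_minima(points):
    # points: list of (distance, energy); keep interior points lower than
    # both neighbors.
    return [points[i] for i in range(1, len(points) - 1)
            if points[i][1] < points[i - 1][1] and points[i][1] < points[i + 1][1]]

def fit_line(pts):
    # Ordinary least squares fit: energy ~ a * distance + b over the minima.
    n = len(pts)
    sx = sum(d for d, _ in pts)
    sy = sum(e for _, e in pts)
    sxx = sum(d * d for d, _ in pts)
    sxy = sum(d * e for d, e in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Synthetic energy profile whose local minima lie on the line e = -0.5*d + 10.
profile = [(0, 12.0), (1, 9.5), (2, 11.0), (3, 8.5),
           (4, 10.0), (5, 7.5), (6, 9.0)]
slope, intercept = fit_line(local_minima(profile))
```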

DRIVER ASSISTANCE SYSTEM AND CONTROL METHOD THEREOF
20200384988 · 2020-12-10 ·

Disclosed herein are a driver assistance system and a control method thereof. The driver assistance system includes a radar installed in a vehicle to detect another vehicle driving outside of the vehicle and configured to acquire radar data comprising position information of the other vehicle, and a controller configured to calculate a risk of collision based on a relative distance of the other vehicle with respect to the vehicle. The controller generates a first region of interest partitioned along an expected driving path of the vehicle, generates a second region of interest while the vehicle moves along the expected driving path when the other vehicle is detected in the first region of interest, and calculates a relative distance of the other vehicle detected in the second region of interest.
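A minimal sketch of the region-of-interest filtering and distance-based risk grading, assuming a straight expected path so the ROI reduces to an axis-aligned rectangle (the patent partitions the ROI along the actual expected path), and assuming made-up warn/brake distance thresholds:

```python
import math

def make_roi(path_start_x, path_end_x, half_width):
    # Rectangular ROI along a straight expected path (a simplifying assumption).
    return (path_start_x, path_end_x, -half_width, half_width)

def detect_in_roi(targets, roi):
    xmin, xmax, ymin, ymax = roi
    return [(x, y) for x, y in targets if xmin <= x <= xmax and ymin <= y <= ymax]

def collision_risk(relative_distance, brake_dist=10.0, warn_dist=30.0):
    # Grade risk from the relative distance alone (thresholds are illustrative).
    if relative_distance <= brake_dist:
        return "high"
    if relative_distance <= warn_dist:
        return "medium"
    return "low"

first_roi = make_roi(0.0, 50.0, 2.0)
targets = [(20.0, 1.0), (20.0, 5.0)]          # second target is off-path
hits = detect_in_roi(targets, first_roi)
risk = collision_risk(math.hypot(*hits[0]))
```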

AUTONOMOUS VEHICLE SIMULATION SYSTEM
20200384998 · 2020-12-10 ·

Techniques for analysis of autonomous vehicle operations are described. As an example, a method of autonomous vehicle operation includes storing sensor data from one or more sensors located on the autonomous vehicle into a storage medium, performing, based on at least some of the sensor data, a simulated execution of one or more programs associated with the operations of the autonomous vehicle, generating, based on the simulated execution of the one or more programs and as part of a simulation, one or more control signal values that control a simulated driving behavior of the autonomous vehicle, and providing a visual feedback of the simulated driving behavior of the autonomous vehicle on a simulated road.
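At its core, the simulated execution replays recorded sensor frames through the vehicle's control program and collects the control-signal values it would have produced. The replay loop and the toy proportional lane-keeping controller below are assumed stand-ins for the stored sensor data and the "one or more programs" the abstract mentions.

```python
def simulate(sensor_log, controller):
    # Replay recorded sensor frames through the control program and collect
    # the control-signal values it would have produced.
    return [controller(frame) for frame in sensor_log]

def controller(frame):
    # Toy proportional lane-keeping controller: steer against the lane offset.
    return {"steer": -0.5 * frame["lane_offset"]}

signals = simulate([{"lane_offset": 0.2}, {"lane_offset": -0.4}], controller)
```

In a full system the resulting signal sequence would drive the simulated vehicle model, whose behavior is then rendered on a simulated road for visual feedback.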

Determining lane assignment based on recognized landmark location

Systems and methods are provided for determining a lane assignment for an autonomous vehicle along a road segment. In one implementation, at least one image representative of an environment of the vehicle is received from a camera. The at least one image may be analyzed to identify at least one recognized landmark, and an indicator of a lateral offset distance between the vehicle and the at least one recognized landmark may be determined. Moreover, a lane assignment of the vehicle along the road segment may be determined based on the indicator of the lateral offset distance between the vehicle and the at least one recognized landmark.
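Once the lateral offset to a recognized landmark is known, the lane assignment reduces to dividing that offset by the lane width. The sketch below assumes the landmark sits at the left road edge, the offset is measured rightward from it, and lanes are a uniform 3.5 m wide; all of these are illustrative assumptions.

```python
def lane_assignment(lateral_offset_m, lane_width_m=3.5):
    """Return the zero-based lane index, counting from the landmark side.

    Assumes the landmark lies at the left road edge and the offset is
    measured rightward from it.
    """
    if lateral_offset_m < 0:
        raise ValueError("vehicle is on the far side of the landmark")
    return int(lateral_offset_m // lane_width_m)

leftmost = lane_assignment(1.0)   # 1.0 m from the edge -> leftmost lane
second = lane_assignment(5.0)     # 5.0 m from the edge -> second lane
```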

Method for generation of a safe navigation path for a vehicle and system thereof
10859389 · 2020-12-08 ·

The present disclosure relates to generation of a safe navigation path for a vehicle. The safe navigation path is generated by a navigation path generation system, which receives a reference path from a source location to a destination location. The system then generates scan lines from the reference path to the boundaries present on either side of the reference path, and generates segments along those boundaries. Using the generated segments, the navigation path generation system generates a safe navigation path along the centre, between the boundaries on either side of the reference path. Navigating the vehicle along the safe path ensures the safety of the vehicle and the people in the vehicle, especially when the vehicle is autonomous or semi-autonomous.
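The centring step can be sketched directly: if each scan line is represented by its two endpoints on the left and right boundaries, the safe path is the sequence of their midpoints. The endpoint representation below is an assumption; the patent's segment construction along the boundaries is more involved.

```python
def centerline(scan_lines):
    # Each scan line: ((xl, yl), (xr, yr)) endpoints on the left and right
    # boundaries; the safe path passes through their midpoints.
    return [((xl + xr) / 2.0, (yl + yr) / 2.0)
            for (xl, yl), (xr, yr) in scan_lines]

# Straight 7 m-wide corridor: the safe path runs down its middle.
scan_lines = [((0.0, 0.0), (0.0, 7.0)),
              ((10.0, 0.0), (10.0, 7.0))]
path = centerline(scan_lines)
```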

FPGA device for image classification

Image processing systems can include one or more cameras configured to obtain image data, one or more memory devices configured to store a classification model that classifies image features within the image data as including or not including detected objects, and a field programmable gate array (FPGA) device coupled to the one or more cameras. The FPGA device is configured to implement one or more image processing pipelines for image transformation and object detection. The one or more image processing pipelines can generate a multi-scale image pyramid of multiple image samples having different scaling factors, identify and aggregate features within one or more of the multiple image samples having different scaling factors, access the classification model, provide the features as input to the classification model, and receive an output indicative of objects detected within the image data.
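The scaling factors of such a multi-scale pyramid are typically spaced geometrically, with several intermediate scales per halving of resolution. The octave-based parameterization below is a common convention, assumed here for illustration rather than taken from the patent.

```python
def pyramid_scales(num_octaves, scales_per_octave):
    """Scaling factors for a multi-scale image pyramid.

    Each octave halves the resolution; intermediate scales are spaced
    geometrically within the octave.
    """
    factor = 2.0 ** (1.0 / scales_per_octave)
    return [1.0 / (factor ** i)
            for i in range(num_octaves * scales_per_octave)]

# Two octaves, two scales per octave -> factors near 1, 0.71, 0.5, 0.35.
scales = pyramid_scales(num_octaves=2, scales_per_octave=2)
```

Each factor would be applied to the input image to produce one sample of the pyramid, and features would then be extracted and aggregated per sample as described above.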