Patent classifications
G06V20/588
Method and Apparatus for Detecting Complexity of Traveling Scenario of Vehicle
This application discloses a method and an apparatus for detecting a complexity of a traveling scenario of a vehicle, comprising: obtaining a traveling speed of the vehicle and a traveling speed of a target vehicle; determining, based on the traveling speed of the vehicle and the traveling speed of the target vehicle, a dynamic complexity of a traveling scenario in which the vehicle is located; determining static information of each static factor in the traveling scenario in which the vehicle is currently located; obtaining, based on the static information of each static factor, a static complexity of the traveling scenario in which the vehicle is located; and obtaining, based on the dynamic complexity and the static complexity, a comprehensive complexity of the traveling scenario in which the vehicle is located.
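The three-stage scoring described in the abstract can be sketched as a simple pipeline. The linear forms, the 30 m/s normalisation constant, the equal factor weighting, and the 0.5 combination weight below are illustrative assumptions, not the patent's actual formulas:

```python
def dynamic_complexity(ego_speed: float, target_speed: float) -> float:
    """Dynamic complexity grows with the relative speed between the ego and
    target vehicle; the linear form and 30 m/s cap are assumptions."""
    return min(abs(ego_speed - target_speed) / 30.0, 1.0)

def static_complexity(static_factors: dict) -> float:
    """Average per-factor scores (each in [0, 1]); equal weighting is assumed."""
    if not static_factors:
        return 0.0
    return sum(static_factors.values()) / len(static_factors)

def comprehensive_complexity(dyn: float, stat: float, w_dyn: float = 0.5) -> float:
    """Weighted sum of dynamic and static complexity; the weight is illustrative."""
    return w_dyn * dyn + (1.0 - w_dyn) * stat
```

For example, an ego vehicle at 20 m/s closing on a target at 5 m/s, in a scenario with moderate curvature and an intersection, would score a comprehensive complexity of 0.55 under these assumed weights.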
PROCESSING DEVICE
Erroneous detection caused by incorrect parallax measurement is suppressed so that a step present on a road is detected accurately. An in-vehicle environment recognition device 1 includes a processing device that processes a pair of images acquired by a stereo camera unit 100 mounted on a vehicle. The processing device includes: a stereo matching unit 200 that measures a parallax of the pair of images and generates a parallax image; a step candidate extraction unit 300 that extracts a step candidate of a road on which the vehicle travels from the parallax image generated by the stereo matching unit 200; a line segment candidate extraction unit 400 that extracts a line segment candidate from the images acquired by the stereo camera unit 100; an analysis unit 500 that collates the step candidate extracted by the step candidate extraction unit 300 with the line segment candidate extracted by the line segment candidate extraction unit 400 and analyzes the validity of the step candidate based on the collation result and the inclination of the line segment candidate; and a three-dimensional object detection unit 600 that detects a step present on the road based on the analysis result of the analysis unit 500.
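The collation performed by the analysis unit 500 can be illustrated with a toy geometric check: a step candidate is retained only if a nearby line segment exists and its inclination is shallow enough to plausibly be a road edge. The midpoint-distance proxy and both thresholds are assumptions for illustration only:

```python
import math

def segment_inclination_deg(x1, y1, x2, y2):
    """Inclination of an image line segment relative to the horizontal axis."""
    return abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))

def validate_step_candidate(candidate_xy, segments, max_dist=5.0, max_incl_deg=30.0):
    """Accept a step candidate only if some extracted line segment passes close
    to it AND has a shallow inclination; thresholds are illustrative."""
    cx, cy = candidate_xy
    for (x1, y1, x2, y2) in segments:
        # Use the segment midpoint as a cheap proximity proxy.
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        close = math.hypot(cx - mx, cy - my) <= max_dist
        shallow = segment_inclination_deg(x1, y1, x2, y2) <= max_incl_deg
        if close and shallow:
            return True
    return False
```

Under this rule a near-horizontal segment next to the candidate confirms it, while a steep (near-vertical) segment at the same location, such as a pole edge, is rejected.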
Methods and Systems for Predicting Properties of a Plurality of Objects in a Vicinity of a Vehicle
A computer-implemented method for predicting properties of a plurality of objects in a vicinity of a vehicle includes multiple steps that can be carried out by computer hardware components. The method includes determining a grid map representation of road-users perception data, with the road-users perception data including tracked perception results and/or untracked sensor intermediate detections. The method also includes determining a grid map representation of static environment data based on data obtained from a perception system and/or a pre-determined map. The method further includes determining the properties of the plurality of objects based on the grid map representation of road-users perception data and the grid map representation of static environment data.
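One plausible reading of the grid map representation is a rasterisation of per-object positions into cells of an ego-centred grid. The grid extent, cell size, and binary occupancy encoding below are assumptions, not the patent's specified representation:

```python
def rasterize_objects(objects, grid_size=10, cell_m=1.0, origin=(0.0, 0.0)):
    """Project road-user detections (x, y in metres, ego frame) into one
    grid-map channel; extent and cell size are illustrative assumptions."""
    grid = [[0.0] * grid_size for _ in range(grid_size)]
    ox, oy = origin
    for x, y in objects:
        col = int((x - ox) / cell_m)
        row = int((y - oy) / cell_m)
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row][col] = 1.0  # binary occupancy; detections outside are dropped
    return grid
```

A static-environment channel built the same way from map or perception data could then be stacked with this channel as input to the property-prediction step.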
SEMANTIC ANNOTATION OF SENSOR DATA USING UNRELIABLE MAP ANNOTATION INPUTS
Provided are methods for semantic annotation of sensor data using unreliable map annotation inputs, which can include training a machine learning model to accept inputs including images representing sensor data for a geographic area and unreliable semantic annotations for the geographic area. The machine learning model can be trained against validated semantic annotations for the geographic area, such that subsequent to training, additional images representing sensor data and additional unreliable semantic annotations can be passed through the trained machine learning model to provide predicted semantic annotations for the additional images. Systems and computer program products are also provided.
METHOD AND APPARATUS FOR PROCESSING IMAGE
The present disclosure provides a method and apparatus for processing an image. A specific implementation includes: acquiring a top view of a road; identifying a position of a lane line from the top view; cutting the top view into at least two areas, and determining, according to the position of the lane line in each area, a width of a lane in each area and an average width of the lane in the top view; calculating a first perspective correction matrix by optimizing a first loss function, the first loss function being used to represent a difference between the width of the lane in each area and the average width of the lane in the top view; and performing a lateral correction on the top view through the first perspective correction matrix to obtain a first corrected image.
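The first loss function penalises the spread of per-area lane widths around the average width across the top view; in a correctly rectified top view, a lane has constant width in every area. A sketch of such a loss, with the squared-error form being an assumption, might look like:

```python
def lane_width_loss(area_widths):
    """Sum of squared deviations of each area's lane width from the mean
    width across the top view. A candidate perspective correction matrix
    would be scored by measuring widths in the warped image and minimising
    this value (the squared form is an assumption)."""
    mean_w = sum(area_widths) / len(area_widths)
    return sum((w - mean_w) ** 2 for w in area_widths)
```

The loss is zero exactly when all areas report the same width, which is the condition the lateral correction is driving toward.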
SYSTEMS AND METHODS FOR DETERMINING ROAD TRAVERSABILITY USING REAL TIME DATA AND A TRAINED MODEL
Embodiments of the disclosed systems and methods provide for determination of roadway traversability by an autonomous vehicle using real time data and a trained traversability determination machine learning model. Consistent with aspects of the disclosed embodiments, the model may be trained using annotated bird's-eye view perspective data obtained using vehicle vision sensor systems (e.g., LiDAR and/or camera systems). During operation of a vehicle, vision sensor data may be used to construct bird's-eye view perspective data, which may be provided to the trained model. The model may label and/or otherwise annotate the vision sensor data based on relationships identified in the model training process to identify associated road boundary and/or lane information. Local vehicle control systems may compute control actions and issue commands to associated vehicle control systems to ensure the vehicle travels within a desired path.
System Adapted to Detect Road Condition in a Vehicle and a Method Thereof
A system adapted to detect road condition in a vehicle, and a method thereof, use geometrical laser projections and an image processing system. The system includes a laser source, an imaging unit and at least a processing unit. The laser source is adapted to project geometrical laser projections on the road. The imaging unit is adapted to capture images of the geometrical projections. The processing unit is configured to calculate a surface reflectance for the projected geometrical projections. It is further configured to compute geometrical parameters of the projections at regular time intervals based on the captured images, and it determines a road condition based on the surface reflectance and the geometrical parameters.
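A toy decision rule in the spirit of the final step could combine the two cues, reflectance and geometric distortion of the projected pattern, as follows; the thresholds and condition labels are purely illustrative and not taken from the patent:

```python
def classify_road_condition(reflectance, distortion):
    """Toy rule: high surface reflectance suggests a wet or icy surface,
    while large geometric distortion of the projected laser pattern
    suggests an uneven surface. Thresholds and labels are assumptions."""
    if reflectance > 0.7:
        return "wet_or_icy"
    if distortion > 0.3:
        return "uneven"
    return "dry_smooth"
```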
METHOD FOR PREDICTING AN EGO-LANE FOR A VEHICLE
A method for predicting an ego-lane for a vehicle. The method includes: receiving at least one image captured by at least one camera sensor of the vehicle, which depicts a lane that may be used by the vehicle; ascertaining a center line of the lane, which extends through a center of the lane, by implementing a trained neural network on the captured image, the neural network being trained via regression to ascertain a center line of a lane, which extends in a center of the lane, based on captured images of the lane; outputting a plurality of parameters, which describe the center line of the lane, via the neural network; generating the center line based on the parameters of the center line; identifying the center line of the lane as the ego-lane of the vehicle; and providing the ego-lane.
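If the network's output parameters are taken to be polynomial coefficients of the center line (an assumption; the abstract does not fix the parameterisation), the generation step reduces to sampling the polynomial at chosen longitudinal positions:

```python
def generate_center_line(coeffs, y_samples):
    """Evaluate an assumed polynomial parameterisation
    x(y) = c0 + c1*y + c2*y**2 + ... at the given y positions,
    yielding (x, y) points along the lane's center line."""
    points = []
    for y in y_samples:
        x = sum(c * y ** i for i, c in enumerate(coeffs))
        points.append((x, y))
    return points
```

With coefficients [1.0, 2.0], sampling at y = 0 and y = 1 yields the points (1.0, 0.0) and (3.0, 1.0), which would then be provided downstream as the ego-lane.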
METHOD AND CONTROL UNIT FOR OPERATING A TRANSVERSE STABILIZATION SYSTEM OF A VEHICLE
A method for operating a transverse stabilization system of a vehicle. A steering direction of the vehicle and a setpoint direction of the vehicle are read in, with a transverse stabilization target for the transverse stabilization system being determined using the steering direction and the setpoint direction.
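A minimal proportional rule for deriving the transverse stabilization target from the two read-in directions might look as follows; the proportional form and the gain value are assumptions, since the abstract only states that both directions are used:

```python
def stabilization_target(steering_deg, setpoint_deg, gain=0.5):
    """Transverse stabilization target proportional to the deviation between
    the setpoint direction and the actual steering direction (both in
    degrees); the proportional law and gain are illustrative."""
    return gain * (setpoint_deg - steering_deg)
```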
CONTEXT BASED LANE PREDICTION
A method for context based lane prediction, the method may include obtaining sensed information regarding an environment of the vehicle; providing the sensed information to a second trained machine learning process; and locating one or more lane boundaries by the second trained machine learning process. The second trained machine learning process is generated by: performing a self-supervised training process, using a first dataset, of a first machine learning process to provide a first trained machine learning process; wherein the first trained machine learning process comprises a first encoder portion and a first decoder portion; replacing the first decoder portion by a second decoder portion to provide a second machine learning process; and performing an additional training process, using a second dataset that is associated with lane boundary metadata, of the second machine learning process to provide a second trained machine learning process.
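The two-stage procedure, self-supervised pretraining followed by decoder replacement and supervised fine-tuning on lane-boundary data, can be sketched with stand-in components. All class names and the toy computations are hypothetical; only the encoder-reuse structure mirrors the abstract:

```python
class Encoder:
    """Stand-in feature extractor, pretrained in stage 1 and reused in stage 2."""
    def __call__(self, x):
        return [v * 0.5 for v in x]

class ReconstructionDecoder:
    """Decoder used only for the self-supervised (e.g. reconstruction) task."""
    def __call__(self, feats):
        return [v * 2.0 for v in feats]

class LaneBoundaryDecoder:
    """Replacement decoder, trained on data with lane boundary metadata."""
    def __call__(self, feats):
        return [1 if v > 0.0 else 0 for v in feats]

class Model:
    def __init__(self, encoder, decoder):
        self.encoder, self.decoder = encoder, decoder
    def __call__(self, x):
        return self.decoder(self.encoder(x))

# Stage 1: first machine learning process (encoder + self-supervised decoder).
first = Model(Encoder(), ReconstructionDecoder())
# Stage 2: keep the pretrained encoder, swap in the lane decoder, then fine-tune.
second = Model(first.encoder, LaneBoundaryDecoder())
```

The key design point is that `second` shares the very same encoder object as `first`, so representations learned without labels in stage 1 carry over to the lane-boundary task.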