
LANE EDGE EXTRACTION METHOD AND APPARATUS, AUTONOMOUS DRIVING SYSTEM, VEHICLE, AND STORAGE MEDIUM
20220357178 · 2022-11-10 ·

This application relates to a lane edge extraction method and apparatus, an autonomous driving system, a vehicle, and a storage medium. The lane edge extraction method includes the steps of: receiving tracking edge points, about lane edges, of an immediately preceding frame of an edge image sequence; determining observation edge points, about the lane edges, of a current frame of the edge image sequence; continuing and correcting the tracking edge points of the immediately preceding frame based on the observation edge points of the current frame, to obtain temporary tracking edge points of the current frame; fitting a lane edge curve based on the temporary tracking edge points; and excluding outliers from the temporary tracking edge points based on the lane edge curve, to form tracking edge points of the current frame. The method can improve the stability and accuracy of lane edge extraction.
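The fit-then-exclude step of the abstract can be sketched with a straight-line model standing in for the patent's (unspecified) lane-edge curve; the function names, the linear model, and the residual threshold below are illustrative assumptions, not the patent's actual algorithm:

```python
def fit_line(points):
    # Least-squares fit of y = a*x + b through (x, y) edge points.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def exclude_outliers(points, max_residual=3.0):
    # Keep only temporary tracking edge points whose vertical distance to
    # the fitted curve is within max_residual; a production system would
    # refit iteratively after each exclusion pass.
    a, b = fit_line(points)
    return [(x, y) for x, y in points if abs(y - (a * x + b)) <= max_residual]
```

A single fit-and-threshold pass is shown for brevity; real lane-edge pipelines typically alternate fitting and exclusion until the point set stabilizes.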

Bounding box estimation and lane vehicle association

Disclosed are techniques for estimating a 3D bounding box (3DBB) from a 2D bounding box (2DBB). Conventional techniques to estimate 3DBB from 2DBB rely upon classifying target vehicles within the 2DBB. When the target vehicle is misclassified, the projected bounding box from the estimated 3DBB is inaccurate. To address such issues, it is proposed to estimate the 3DBB without relying upon classifying the target vehicle.

METHOD FOR REVERSING AN ARTICULATED VEHICLE COMBINATION
20230096655 · 2023-03-30 ·

The disclosure relates to a method for reversing an articulated vehicle combination along a road curvature of a road, comprising obtaining image data representing a rearward view with respect to the articulated vehicle combination, wherein the method further comprises detecting road edges of the road in the image data; determining a plurality of longitudinal and lateral positions of the respective detected road edges; calculating road curvature of the road based on the determined lateral and longitudinal positions; calculating vehicle path curvature to stay on the road; and reversing the articulated vehicle combination by automatically steering the articulated vehicle combination to follow the calculated vehicle path curvature in order to follow the road curvature, wherein the road edges are detected by use of an image analysis algorithm which is based on a plurality of predefined types of road edges of a plurality of different road types.
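The abstract does not spell out how road curvature is computed from the detected edge positions; one standard way to recover curvature from three longitudinal/lateral edge points is the Menger (circumscribed-circle) curvature, shown here purely as an illustrative sketch:

```python
import math

def menger_curvature(p1, p2, p3):
    # Curvature of the circle through three 2-D points:
    # k = 4 * triangle_area / (d12 * d23 * d31) = 1 / radius.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    area2 = abs((x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1))  # 2 * area
    d12 = math.dist(p1, p2)
    d23 = math.dist(p2, p3)
    d31 = math.dist(p3, p1)
    if d12 * d23 * d31 == 0:
        return 0.0  # degenerate (coincident points)
    return 2.0 * area2 / (d12 * d23 * d31)
```

Three points sampled along a circular road edge of radius R yield a curvature of 1/R, and collinear points (a straight road) yield zero.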

THREE-DIMENSIONAL-OBJECT DETECTION DEVICE, ON-VEHICLE SYSTEM, AND THREE-DIMENSIONAL-OBJECT DETECTION METHOD

The present invention improves the accuracy of detection of a three-dimensional object. A three-dimensional-object detection device generates a mask image 90 that masks regions outside a three-dimensional-object candidate region in a difference image G of a first overhead image F1 and a second overhead image F2 whose imaging locations O are mutually aligned; identifies a near ground contact line L1 of the three-dimensional object based on a masked difference image Gm, in which the difference image G is masked with the mask image 90; finds an end point V of the three-dimensional object based on the masked difference image Gm; identifies the width of the three-dimensional object based on the distance between a non-masking region boundary N and the end point V in the mask image 90; identifies a far ground contact line L2 of the three-dimensional object based on that width and the near ground contact line L1; and identifies the location of the three-dimensional object in the difference image G based on the near ground contact line L1 and the far ground contact line L2.

Method and device for determining the geographic position and orientation of a vehicle

In a method for determining the geographic position and orientation of a vehicle, an image of the vehicle's surroundings is recorded by at least one camera of the vehicle, wherein the recorded image at least partially comprises regions of the vehicle's surroundings at ground level. Classification information is generated for the individual pixels of the recorded image and indicates an assignment to one of several given object classes; based on this assignment, a semantic segmentation of the image is performed. Ground texture transitions are detected based on the semantic segmentation of the image. The detected ground texture transitions are projected onto the ground level of the vehicle's surroundings. The deviation between the projected ground texture transitions and ground texture transitions in a global reference map is minimized. The current position and orientation of the vehicle in space are output based on the minimized deviation.
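The deviation-minimization step can be illustrated under two strong simplifying assumptions that the abstract does not make: point correspondences are known, and the pose error is translation-only (the real method would also estimate orientation):

```python
def best_translation(projected, map_points):
    # Least-squares translation aligning projected ground-texture transition
    # points to their counterparts in the global reference map. With known
    # correspondences, the optimum is the difference of the centroids.
    n = len(projected)
    tx = sum(mx - px for (px, _), (mx, _) in zip(projected, map_points)) / n
    ty = sum(my - py for (_, py), (_, my) in zip(projected, map_points)) / n
    return tx, ty
```

The recovered offset corrects the vehicle's position estimate; extending this to rotation leads to the classic point-set registration (e.g., ICP-style) formulation.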

VEHICLE MONITORING METHOD AND MONITORING SYSTEM
20230102322 · 2023-03-30 ·

Provided are a vehicle monitoring method and a vehicle monitoring system. The vehicle monitoring method includes the steps of: calculating a polarization angle of polarized light in a sky image reflected by a vehicle window in a monitoring scenario, where the polarized light in the sky image is formed by scattered sunlight in the sky region corresponding to the sky image; calculating a light-filtering polarization angle according to that polarization angle; filtering out, according to the light-filtering polarization angle, the polarized light in the sky image reflected by the vehicle window in the monitoring scenario; and imaging the monitoring scenario to form a monitoring image.

Data augmentation for detour path configuring

This application is directed to augmenting training images used for generating vehicle driving models. A computer system obtains a first image of a road, identifies within the first image a drivable area of the road, obtains an image of a traffic safety object, and determines a detour path on the drivable area. The computer system determines positions of a plurality of traffic safety objects to be placed adjacent to the detour path, and generates a second image from the first image by adaptively overlaying a respective copy of the image of the traffic safety object at each of the positions of the plurality of traffic safety objects on the drivable area within the first image. The second image is added to a corpus of training images to be used by a machine learning system to generate a model for facilitating driving a vehicle (e.g., at least partially autonomously).
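The overlay step can be reduced to its core operation, pasting a copy of the object patch at each computed position; this nearest-pixel paste on single-channel arrays is a minimal stand-in for the patent's (unspecified) adaptive overlaying:

```python
def overlay(base, patch, positions):
    # Paste a copy of `patch` (2-D list) into `base` at each (row, col)
    # anchor, returning the augmented image; copies that extend past the
    # frame are clipped. The original base image is left unmodified.
    out = [row[:] for row in base]
    h, w = len(out), len(out[0])
    ph, pw = len(patch), len(patch[0])
    for r0, c0 in positions:
        for r in range(ph):
            for c in range(pw):
                if 0 <= r0 + r < h and 0 <= c0 + c < w:
                    out[r0 + r][c0 + c] = patch[r][c]
    return out
```

A real augmentation pipeline would additionally scale each copy with perspective and blend its edges, which is presumably what "adaptively" covers in the abstract.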

METHOD TO DETECT AND OVERCOME DEGRADATION IMAGE QUALITY IMPACTS
20230098949 · 2023-03-30 ·

Method of detecting and overcoming degradation of image quality during a weather event and activation of windshield wipers. The method takes a high-contrast region of interest (ROI) of an image captured by a camera of a vehicle vision system, applies a Laplacian operator at the pixel level within the ROI, and calculates the variance of the Laplacian. Images whose ROI falls below a predetermined variance threshold are classified or flagged as low-quality. Once a low-quality image is flagged, the camera settings can be readjusted in a feedback loop to compensate for the loss in quality of the captured image. This method may be stored as a software routine and implemented by a vision control module of the vision system.
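The variance-of-Laplacian measure described here is a standard blur metric; a dependency-free sketch using the 4-neighbour Laplacian kernel follows (the specific kernel and the threshold value are assumptions, since the abstract names neither):

```python
def laplacian_variance(roi):
    # Variance of the 4-neighbour Laplacian over a grayscale ROI given as
    # a 2-D list of pixel values; a low value indicates a blurred image.
    h, w = len(roi), len(roi[0])
    lap = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            lap.append(roi[r - 1][c] + roi[r + 1][c] + roi[r][c - 1]
                       + roi[r][c + 1] - 4 * roi[r][c])
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

def is_low_quality(roi, threshold=100.0):
    # Flag the frame when the ROI's Laplacian variance falls below the
    # (assumed) predetermined threshold.
    return laplacian_variance(roi) < threshold
```

A flat (featureless) ROI scores zero, while a high-contrast ROI scores high, so rain-smeared frames of a normally high-contrast region fall below the threshold.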

TARGET VEHICLE RECOGNITION APPARATUS AND PROCESSING METHOD OF THE TARGET VEHICLE RECOGNITION APPARATUS

A target vehicle recognition apparatus for recognizing a target vehicle as a target to be avoided by steering control of a host vehicle detects a stopped vehicle located in front of the host vehicle, determines whether the stopped vehicle is in a forward-facing state or a rearward-facing state relative to the host vehicle, determines whether the stopped vehicle is in a right boundary line deviation state crossing a right boundary line of a travel lane of the host vehicle or a left boundary line deviation state crossing a left boundary line of the travel lane, and recognizes the target vehicle for steering control based on a captured image captured by a front camera of the host vehicle and each of the determination results.

SYSTEM AND METHOD FOR LANE DEPARTURE WARNING WITH EGO MOTION AND VISION

An apparatus includes at least one camera configured to capture at least one image of a traffic lane, an inertial measurement unit (IMU) configured to detect motion characteristics, and at least one processor. The at least one processor is configured to obtain a vehicle motion trajectory using the IMU and based on one or more vehicle path prediction parameters, obtain a vehicle vision trajectory based on the at least one image, wherein the vehicle vision trajectory includes at least one lane boundary, determine distances between one or more points on the vehicle and one or more intersection points of the at least one lane boundary based on the obtained vehicle motion trajectory, determine at least one time to line crossing (TTLC) based on the determined distances and a speed of the vehicle, and activate a lane departure warning indicator based on the determined at least one TTLC.
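The time-to-line-crossing computation and the warning decision reduce to distance over speed compared against a threshold; the 2-second threshold below is an illustrative assumption, not a value from the disclosure:

```python
def time_to_line_crossing(distance_m, speed_mps):
    # Time until the vehicle reaches a lane-boundary intersection point
    # along its predicted motion trajectory.
    if speed_mps <= 0:
        return float('inf')  # stationary or reversing: no imminent crossing
    return distance_m / speed_mps

def should_warn(distances_m, speed_mps, ttlc_threshold_s=2.0):
    # Activate the lane departure warning when any predicted crossing is
    # closer in time than the (assumed) threshold.
    return any(time_to_line_crossing(d, speed_mps) < ttlc_threshold_s
               for d in distances_m)
```

At 5 m/s, a boundary intersection 5 m ahead gives a TTLC of 1 s and triggers the warning, while one 20 m ahead (TTLC 4 s) does not.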