
SYSTEM AND METHOD FOR SIMULTANEOUS ONLINE LIDAR INTENSITY CALIBRATION AND ROAD MARKING CHANGE DETECTION
20230126833 · 2023-04-27

A system, method, and computer program for updating calibration lookup tables within an autonomous vehicle, or for transmitting roadway marking changes between online and offline mapping files, is disclosed. A LIDAR sensor may be used to generate an online (rasterized) mapping file with online intensity values, which are compared against a correlated offline (rasterized) mapping file having offline intensity values. The online intensity value may be used to acquire a lookup table having a normal distribution that is compared against the offline intensity value. The lookup table may be updated when the offline intensity value is within the normal distribution; otherwise, when the offline intensity value is outside the normal distribution, the vehicle may transmit a roadway marking change.
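The decision rule described in the abstract can be sketched as follows. This is a minimal illustration only: the abstract does not specify how the lookup table is stored or what bound defines "within the normal distribution," so the per-entry mean/standard-deviation layout, the three-sigma bound, and the names `classify_cell` and `sigma_bound` are all assumptions.

```python
from statistics import NormalDist

def classify_cell(lut_entry, offline_intensity, sigma_bound=3.0):
    """Return 'update_lut' if the offline intensity falls inside the
    lookup-table entry's normal distribution (here: within sigma_bound
    standard deviations of the mean), else 'report_marking_change'."""
    dist = NormalDist(mu=lut_entry["mean"], sigma=lut_entry["std"])
    z = abs(offline_intensity - dist.mean) / dist.stdev
    return "update_lut" if z <= sigma_bound else "report_marking_change"

entry = {"mean": 120.0, "std": 5.0}
print(classify_cell(entry, 128.0))  # within 3 sigma -> 'update_lut'
print(classify_cell(entry, 160.0))  # outside -> 'report_marking_change'
```

In this reading, the same comparison drives both branches of the claim: an in-distribution offline value refines the calibration entry, while an out-of-distribution value is treated as evidence that the road marking itself changed.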

SYSTEM AND METHOD FOR APPROACHING VEHICLE DETECTION

An identification apparatus for an equipped vehicle includes a camera configured to capture image data in a field of view directed to an exterior region proximate to the equipped vehicle and a controller in communication with the camera. The controller identifies an object in the image data and identifies a proportion of the object in response to pixels representing the object in the image data. The controller then communicates a notification indicating a trailing vehicle in response to the object approaching the camera in excess of a threshold.
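The notification logic above can be sketched by tracking the proportion of image pixels an object occupies across frames and flagging growth beyond a threshold. The abstract does not define how "approaching in excess of a threshold" is computed, so the relative-growth formulation, the function name `approaching`, and the `growth_threshold` value are illustrative assumptions.

```python
def approaching(prev_proportion, curr_proportion, growth_threshold=0.25):
    """True if the object's share of image pixels grew fast enough
    between frames to indicate a vehicle approaching the camera."""
    if prev_proportion <= 0:
        return False  # object not previously visible; no growth rate
    growth = (curr_proportion - prev_proportion) / prev_proportion
    return growth > growth_threshold

print(approaching(0.04, 0.06))   # 50% growth between frames -> True
print(approaching(0.04, 0.041))  # ~2.5% growth -> False
```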

OBSTACLE DETECTION APPARATUS, OBSTACLE DETECTION METHOD, AND OBSTACLE DETECTION PROGRAM

In an obstacle detection apparatus, the captured image of a vicinity of a vehicle from an imaging apparatus is acquired. A three-dimensional estimation image showing a three-dimensional position of a feature point in the captured image is generated, and a three-dimensional position of an object is estimated. An attribute image in which an object in the captured image is classified into one or more classes that include at least a road-surface class is generated. The three-dimensional estimation image and the attribute image are fused, the feature points and the classes are associated, and road-surface points associated with the road-surface class are extracted. Based on the road-surface points, a road-surface height in the vicinity of the vehicle is estimated. Based on the estimated road-surface height, the three-dimensional position of the object is corrected. Based on the three-dimensional position of the object, an obstacle in the vicinity of the vehicle is detected.
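The fusion-and-correction step above can be sketched as follows, assuming feature points carry estimated 3D positions and a parallel list of per-point class labels from the attribute image. The point layout, the `road_surface` label, and the use of a median for the road-height estimate are illustrative choices, not details from the abstract.

```python
from statistics import median

def correct_object_heights(points, classes):
    """points: list of dicts with 'x', 'y', 'z' (z = estimated height).
    classes: per-point class labels from the attribute image.
    Estimates road height from road-surface points, then corrects the
    heights of the remaining (object) points relative to the road."""
    road_z = [p["z"] for p, c in zip(points, classes) if c == "road_surface"]
    road_height = median(road_z) if road_z else 0.0
    corrected = [
        {**p, "z": p["z"] - road_height}
        for p, c in zip(points, classes) if c != "road_surface"
    ]
    return corrected, road_height
```

Points associated with the road-surface class anchor the height estimate; subtracting that height from the remaining points corrects their 3D positions before obstacle detection.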

RUT DETECTION FOR ROAD INFRASTRUCTURE

A computer-implemented method for rut detection is provided. The method includes detecting, by a rut detection system, areas in a road-scene image that include ruts, with pixel-wise probability values in which a higher value indicates a higher likelihood of being a rut. The method further includes performing at least one of rut repair and vehicle rut avoidance responsive to the pixel-wise probability values. The detecting step includes performing neural-network-based, pixel-wise semantic segmentation with context information on the road-scene image to distinguish rut pixels from non-rut pixels on a road depicted in the road-scene image.
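Acting on the pixel-wise probabilities might look like the sketch below, assuming the segmentation network has already produced a per-pixel rut probability map. The abstract does not specify the decision rule, so thresholding pixels and then thresholding the rut-pixel fraction, along with both threshold values, are invented for illustration.

```python
def rut_decision(prob_map, pixel_threshold=0.5, area_threshold=0.02):
    """prob_map: 2D list of per-pixel rut probabilities in [0, 1].
    Counts pixels above pixel_threshold as rut pixels; triggers
    repair/avoidance if their image fraction exceeds area_threshold."""
    flat = [p for row in prob_map for p in row]
    rut_pixels = sum(1 for p in flat if p > pixel_threshold)
    rut_fraction = rut_pixels / len(flat)
    return "avoid_or_repair" if rut_fraction > area_threshold else "no_action"

print(rut_decision([[0.1, 0.9], [0.2, 0.8]]))  # half the pixels are ruts
```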

INTELLIGENT RESCUE METHOD, RESCUE DEVICE, AND VEHICLE
20230130609 · 2023-04-27

An intelligent rescue method, applied to a vehicle-mounted device and an airborne rescue device to enable semi-automatic warning and rescue of a broken-down, crashed, submerged, or stranded vehicle, establishes communication between the vehicle-mounted device and the rescue device. The vehicle-mounted device determines, by sensors, the type of emergency of the vehicle, performs a first assistance action, and sends the rescue device a distress signal corresponding to the type of emergency. The rescue device receives the distress signal and, in response, takes off from an initial position near the vehicle and flies to a target position. Once the rescue device reaches the target position, it performs a second assistance action.

Crowd sourcing data for autonomous vehicle navigation

Systems and methods of processing crowdsourced navigation information for use in autonomous vehicle navigation are disclosed. A method may include processing, by a mapping server, crowdsourced navigation information from a plurality of vehicles, obtained by sensors coupled to the plurality of vehicles, wherein the navigation information describes road lanes of a road segment; collecting data about landmarks identified proximate to the road segment, the landmarks including a traffic sign; generating, by the mapping server, an autonomous vehicle map for the road segment, wherein the autonomous vehicle map includes a spline corresponding to a lane in the road segment and the landmarks identified proximate to the road segment; and distributing, by the mapping server, the autonomous vehicle map to an autonomous vehicle for use in autonomous navigation over the road segment.
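The map-generation step above can be sketched by aggregating multiple vehicles' lane traces into a single centerline. As a stand-in for the spline in the abstract, the sketch averages corresponding points station-by-station into a polyline; the data layout and the function name `build_lane_centerline` are illustrative assumptions.

```python
def build_lane_centerline(traces):
    """traces: list of equal-length lists of (x, y) points, each list
    one vehicle's observed path along the same lane. Returns a
    polyline averaging the traces point-by-point."""
    n = len(traces[0])
    centerline = []
    for i in range(n):
        xs = [t[i][0] for t in traces]
        ys = [t[i][1] for t in traces]
        centerline.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centerline

# Two vehicles drove the same lane with slight lateral offsets:
print(build_lane_centerline([[(0, 0), (1, 0)], [(0, 2), (1, 2)]]))
```

A production pipeline would fit a smooth spline through such aggregated points and attach the collected landmarks before distributing the map.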

Traffic cones and traffic cone lanterns placement and collection system and a method

The present disclosure provides a traffic cone and traffic cone lantern placement and collection system and method. The system comprises: a vehicle body, on which a loading bay and a storage bay are disposed; an on-vehicle first robot arm for moving a traffic cone and the traffic cone lantern on it from the loading bay off the vehicle body, or for collecting them from outside the vehicle into the loading bay; an on-vehicle second robot arm for moving a traffic cone and a traffic cone lantern between the loading bay and the storage bay for storage management; at least one object recognition sensor for capturing information about road traffic markings and about objects on the road; and a processing unit for computing the position of the road traffic markings and the position information of the objects on the road, and for controlling the robot arms' motion accordingly to move the traffic cone of the loading bay and the traffic cone lantern on it off the vehicle body, or to collect them from outside the vehicle. The disclosure enables automatic placement and automatic collection of both traffic cones and traffic cone lanterns.

Lane recognition for automotive vehicles
11600081 · 2023-03-07

The present invention relates to a lighting system (200) of an automotive vehicle comprising: an image capture device (205) configured to acquire an image (I) of a road (R) of travel of the vehicle, said road (R) comprising lanes (110) marked on the road; and a lighting module (215) configured to project road markings (120) on the road; wherein said lighting system (200) is configured to filter the projected road markings (120) on the road relative to the lanes (110) marked on the road.

Method of recognizing median strip and predicting risk of collision through analysis of image
11634124 · 2023-04-25

A method of recognizing a median strip and predicting risk of a collision through analysis of an image includes acquiring an image of the road ahead including a median strip and a road bottom surface through a camera of a moving vehicle (S110), generating a Hough space by detecting an edge from the image (S120), recognizing an upper straight line of the median strip from the Hough space (S130), generating a region of interest (ROI) of the median strip using information on the upper straight line of the median strip and a lane (S140), detecting an object from an internal part of the ROI of the median strip through a labeling scheme (S150), and determining a tracking-point set of the objects that satisfy a specific condition (S160).
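Steps S120–S130 above (generating a Hough space from edges and recognizing the strongest straight line) can be sketched with a plain voting accumulator over edge points. The quantization choices, the `(rho, theta)` binning, and the function name `hough_strongest_line` are illustrative; a real implementation would operate on a full edge map rather than a point list.

```python
import math
from collections import Counter

def hough_strongest_line(edge_points, theta_steps=180, rho_res=1.0):
    """Accumulate (rho, theta) votes for each edge point; return the
    (rho, theta) of the most-voted bin, i.e. the dominant line."""
    acc = Counter()
    for x, y in edge_points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_res), t)] += 1
    (rho_bin, t_bin), _ = acc.most_common(1)[0]
    return rho_bin * rho_res, math.pi * t_bin / theta_steps

# Edge points on the horizontal line y = 10 vote together in one bin:
line = [(x, 10) for x in range(0, 50, 5)]
rho, theta = hough_strongest_line(line)
```

For a horizontal line such as the upper edge of a median strip, the winning bin lands at theta = pi/2 with rho equal to the line's height, which is what step S130 would then pass on for ROI generation.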

Artificial intelligence using convolutional neural network with Hough transform

Artificial intelligence using convolutional neural network with Hough Transform. In an embodiment, a convolutional neural network (CNN) comprises convolution layers, a Hough Transform (HT) layer, and a Transposed Hough Transform (THT) layer, arranged such that at least one convolution layer precedes the HT layer, at least one convolution layer is between the HT and THT layers, and at least one convolution layer follows the THT layer. The HT layer converts its input from a first space into a second space, and the THT layer converts its input from the second space into the first space. The CNN may be applied to an input image to perform semantic image segmentation, so as to produce an output image representing a result of the semantic image segmentation.
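The HT/THT pair described above can be sketched on a tiny grid in plain Python: the Hough layer accumulates pixel values into (rho, theta) bins, and the transposed layer scatters bin values back along the same quantized lines. This is a sketch of the transform pair only, not of the patented network; the grid size, quantization, and function names are illustrative, and the convolution layers between them are omitted.

```python
import math

def hough_layer(img, theta_steps=8):
    """Image space -> Hough space: each nonzero pixel votes its value
    into one (rho, theta) bin per discretized theta."""
    h, w = len(img), len(img[0])
    rho_max = int(math.hypot(h, w)) + 1  # bound on |rho|; offset index
    acc = [[0.0] * theta_steps for _ in range(2 * rho_max)]
    for y in range(h):
        for x in range(w):
            v = img[y][x]
            if v == 0:
                continue
            for t in range(theta_steps):
                theta = math.pi * t / theta_steps
                rho = round(x * math.cos(theta) + y * math.sin(theta))
                acc[rho + rho_max][t] += v
    return acc, rho_max

def transposed_hough_layer(acc, rho_max, h, w, theta_steps=8):
    """Hough space -> image space: each pixel gathers the values of
    every (rho, theta) bin whose line passes through it (the transpose
    of the voting pattern used in hough_layer)."""
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for t in range(theta_steps):
                theta = math.pi * t / theta_steps
                rho = round(x * math.cos(theta) + y * math.sin(theta))
                out[y][x] += acc[rho + rho_max][t]
    return out
```

In the claimed arrangement, learned convolutions run before the HT layer (image space), between the HT and THT layers (Hough space, where straight lines appear as localized peaks), and after the THT layer (image space again) to produce the segmentation output.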