
CALIBRATION APPARATUS AND CALIBRATION METHOD
20220076453 · 2022-03-10

Calibration with high accuracy can be realized even when performing the calibration while running on the actual road. Specifically, the calibration apparatus is mounted in a vehicle and includes: an image acquisition unit configured to acquire captured images obtained by a camera, which is mounted in the vehicle, capturing images of surroundings of the vehicle; a feature point extraction unit configured to extract a plurality of feature points from the captured images; a tracking unit configured to track the same feature point from a plurality of the captured images captured at different times with respect to each of the plurality of feature points, which are extracted by the feature point extraction unit, and record the tracked feature point as a feature point trajectory; a lane recognition unit configured to recognize an own vehicle's lane, which is a driving lane on which the vehicle is running, from the captured images; a sorting unit configured to sort out the feature point trajectory, which is in the same plane as a plane included in the own vehicle's lane recognized by the lane recognition unit, among feature point trajectories tracked and recorded by the tracking unit; and an external parameter estimation unit configured to estimate external parameters for the camera by using the feature point trajectory sorted out by the sorting unit.
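The sorting step above can be sketched in pure Python: fit the lane's ground plane, then keep only trajectories whose points lie on that plane. This is an illustrative sketch, not the patent's implementation; the function names, the plane model z = ax + by + c, and the tolerance are assumptions.

```python
def fit_plane_z(points):
    """Least-squares fit of z = a*x + b*y + c to 3D points; returns (a, b, c).

    Assumes at least three non-degenerate points (hypothetical helper).
    """
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1
        sxz += x * z; syz += y * z; sz += z
    # Normal equations solved with Cramer's rule on the 3x3 system.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    sol = []
    for i in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][i] = b[r]
        sol.append(det3(m) / d)
    return tuple(sol)


def sort_coplanar_trajectories(trajectories, lane_points, tol=0.05):
    """Keep trajectories whose points lie on the lane's plane (tol in metres)."""
    a, b, c = fit_plane_z(lane_points)
    return [traj for traj in trajectories
            if all(abs(z - (a * x + b * y + c)) <= tol for x, y, z in traj)]
```

Only plane-consistent trajectories then feed the external-parameter estimation, which is the point of the sorting unit: off-plane features (kerbs, other vehicles) would bias the extrinsics.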

Image processing device and image processing method
11270452 · 2022-03-08

An image processing device includes a marker detector configured to detect markers including white lines extending in two directions on a road surface based on an image signal from an imager that takes an image of the road surface around a vehicle, a parking frame detector configured to compute adjacent markers on the road surface among the detected markers, and to detect a parking frame defined by the adjacent markers based on a distance between the adjacent markers, and a shape estimator configured to detect extending directions of the white lines of the markers that are included in the detected parking frame, and to estimate a shape of the parking frame based on the extending directions of the detected white lines.
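The parking-frame detector's distance test can be sketched as pairing adjacent markers whose lateral spacing matches a plausible frame width. A minimal sketch, assuming 1D lateral marker positions in metres; the width thresholds are illustrative, not from the patent.

```python
def detect_parking_frames(marker_positions, min_width=2.0, max_width=3.5):
    """Pair adjacent markers whose spacing matches a parking-frame width.

    marker_positions: lateral positions (m) of detected line markers.
    Returns a list of (left, right) pairs; thresholds are assumptions.
    """
    xs = sorted(marker_positions)
    return [(left, right) for left, right in zip(xs, xs[1:])
            if min_width <= right - left <= max_width]
```

The shape estimator would then look at the extending directions of the white lines inside each accepted pair to classify the frame (e.g. perpendicular vs. angled parking).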

Surrounding vehicle display method and surrounding vehicle display device

A surrounding vehicle display device includes: a surrounding information detection device that obtains information on surroundings of a host vehicle; a virtual image generation unit that uses the information obtained by the surrounding information detection device to generate a virtual image that indicates the surroundings of the host vehicle as viewed from above the host vehicle; and a controller that starts an examination of whether to perform an auto lane change before performing the auto lane change. After starting the examination, the controller makes the display region of the surroundings of the host vehicle on the virtual image wider than the display region before the examination is started.
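The controller's display behaviour reduces to a conditional widening of the bird's-eye view once the lane-change examination starts. A trivial sketch; the widening factor is an assumption, as the abstract only states that the region becomes wider.

```python
def display_region_width(base_width_m, examining_lane_change, widen_factor=1.5):
    """Return the bird's-eye display width in metres; widened once the
    auto-lane-change examination starts (factor is illustrative)."""
    return base_width_m * widen_factor if examining_lane_change else base_width_m
```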

CROSS-MODAL SENSOR DATA ALIGNMENT
20220076082 · 2022-03-10

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining an alignment between cross-modal sensor data. In one aspect, a method comprises: obtaining (i) an image that characterizes a visual appearance of an environment, and (ii) a point cloud comprising a collection of data points that characterizes a three-dimensional geometry of the environment; processing each of a plurality of regions of the image using a visual embedding neural network to generate a respective embedding of each of the image regions; processing each of a plurality of regions of the point cloud using a shape embedding neural network to generate a respective embedding of each of the point cloud regions; and identifying a plurality of region pairs using the embeddings of the image regions and the embeddings of the point cloud regions.
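The final pairing step can be sketched as nearest-neighbour matching of the two embedding sets by cosine similarity. This greedy rule is a simplification chosen for illustration; the abstract leaves the exact pairing criterion open, and all names here are hypothetical.

```python
def cosine(u, v):
    """Cosine similarity of two equal-length vectors (assumed non-zero)."""
    num = sum(a * b for a, b in zip(u, v))
    du = sum(a * a for a in u) ** 0.5
    dv = sum(b * b for b in v) ** 0.5
    return num / (du * dv)


def match_regions(image_embeddings, cloud_embeddings):
    """Greedily pair each image-region embedding with its most similar
    point-cloud-region embedding; returns (image_idx, cloud_idx) pairs."""
    pairs = []
    for i, ie in enumerate(image_embeddings):
        j = max(range(len(cloud_embeddings)),
                key=lambda k: cosine(ie, cloud_embeddings[k]))
        pairs.append((i, j))
    return pairs
```

In the patented system the embeddings would come from the visual and shape embedding networks; a shared embedding space is what makes cross-modal similarity meaningful at all.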

METHOD OF RECOGNIZING MEDIAN STRIP AND PREDICTING RISK OF COLLISION THROUGH ANALYSIS OF IMAGE
20220063608 · 2022-03-03

A method of recognizing a median strip and predicting risk of a collision through analysis of an image includes acquiring an image of the road ahead including a median strip and a road bottom surface through a camera of a moving vehicle (S110), generating a Hough space by detecting an edge from the image (S120), recognizing an upper straight line of the median strip from the Hough space (S130), generating a region of interest (ROI) of the median strip using information on the upper straight line of the median strip and a lane (S140), detecting an object from an internal part of the ROI of the median strip through a labeling scheme (S150), and determining a tracking-point set of the objects that satisfy a specific condition (S160).
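Steps S120–S130 hinge on a standard Hough transform: each edge point votes for all (theta, rho) lines through it, and the accumulator peak gives the dominant straight line (here, the upper edge of the median strip). A minimal sketch with integer-degree theta bins and rounded rho; binning and ranges are illustrative assumptions.

```python
import math

def hough_peak(edge_points, rho_max=50):
    """Vote edge points into a (theta, rho) accumulator and return the peak.

    theta is in whole degrees [0, 180); rho is rounded to the nearest integer
    and clipped to [-rho_max, rho_max]. Returns the (theta, rho) bin with the
    most votes.
    """
    votes = {}
    for x, y in edge_points:
        for theta in range(180):
            t = math.radians(theta)
            rho = round(x * math.cos(t) + y * math.sin(t))
            if -rho_max <= rho <= rho_max:
                votes[(theta, rho)] = votes.get((theta, rho), 0) + 1
    return max(votes, key=votes.get)
```

A horizontal edge (constant y) peaks near theta = 90 degrees, which matches the roughly horizontal upper line of a median strip in a forward-facing image; the ROI in S140 is then built around that line.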

ROAD OBSTACLE DETECTION DEVICE, ROAD OBSTACLE DETECTION METHOD AND PROGRAM

In a road obstacle detection device, a first derivation unit derives, for each of a plurality of local regions, a probability that a local region is the road, such that the probability is higher as the ratio of a road region in the local region is higher; and a second derivation unit derives a probability that a target local region is not a previously decided normal physical body, and derives a probability that a road obstacle exists at the target local region, based on the derived probability that the target local region is not the normal physical body and a probability that a peripheral local region is the road, the peripheral local region being a local region at a periphery of the target local region, the probability that the peripheral local region is the road being derived by the first derivation unit.
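The two derivation units combine naturally as follows: a region is a likely obstacle when it does not look like any known normal body and its surroundings look like road. A sketch using a simple product rule; the patent states only the dependence, not the exact formula, so both functions are illustrative.

```python
def road_probability(road_pixels, total_pixels):
    """First derivation unit: higher as the ratio of road pixels in the
    local region is higher (here, simply the ratio itself)."""
    return road_pixels / total_pixels


def obstacle_probability(p_not_normal, p_peripheral_road):
    """Second derivation unit: combine the probability that the target
    region is not a normal body with the probability that its peripheral
    region is road. The product is an illustrative choice."""
    return p_not_normal * p_peripheral_road
```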

OBJECT DETECTION

A computing device is programmed to generate a plurality of raw 3D point clouds from respective sensors having non-overlapping fields of view, scale each of the raw point clouds including scaling real-world dimensions of one or more features included in the respective raw 3D point cloud, determine a first transformation matrix that transforms a first coordinate system of a first scaled 3D point cloud of a first sensor to a second coordinate system of a second scaled 3D point cloud of a second sensor, and determine a second transformation matrix that transforms a third coordinate system of a third scaled 3D point cloud of a third sensor to the second coordinate system of the second scaled 3D point cloud of the second sensor. The computing device is programmed to, based on the first and second transformation matrices, upon detecting an object in a first or third camera field of view, determine location coordinates of the object relative to a coordinate system that is defined based on the second coordinate system, and output the determined location coordinates of the object.
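Applying a transformation matrix to bring a detection into the second sensor's reference frame is a homogeneous-coordinates multiply. A sketch in 2D homogeneous coordinates standing in for the patent's 3D case; the matrix and function names are hypothetical.

```python
def mat_vec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(len(v)))
                 for r in range(len(m)))


def to_reference_frame(point_s1, T1):
    """Express a point detected in sensor 1's frame in the second
    (reference) sensor's coordinate system, using the first
    transformation matrix T1 in homogeneous 2D coordinates."""
    x, y = point_s1
    return mat_vec(T1, (x, y, 1.0))[:2]
```

The same pattern with the second matrix handles the third sensor, so every detection from the non-overlapping sensors ends up in one common coordinate system.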

CAMERA CALIBRATION APPARATUS AND OPERATING METHOD
20220067973 · 2022-03-03

A camera calibration apparatus includes: a camera configured to acquire a first forward image from a first viewpoint and a second forward image from a second viewpoint; an event trigger module configured to determine whether to perform camera calibration; a motion estimation module configured to acquire information related to motion of a host vehicle; a three-dimensional reconstruction module configured to acquire three-dimensional coordinate values based on the first forward image and the second forward image; and a parameter estimation module configured to estimate an external parameter of the camera based on the three-dimensional coordinate values.
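The reconstruction and parameter-estimation stages can be sketched with two toy formulas: triangulating depth by treating the two viewpoints as a stereo pair (a deliberate simplification of two-view geometry), and reading one external parameter, the camera height, off the reconstructed ground points. Both functions and the down-pointing y-axis convention are assumptions for illustration.

```python
def depth_from_two_views(u1, u2, baseline_m, focal_px):
    """Triangulate depth of a static point seen in two images, treating
    the viewpoints as a stereo pair: z = f * b / disparity. The baseline
    would come from the motion-estimation module."""
    disparity = abs(u2 - u1)
    return focal_px * baseline_m / disparity


def estimate_camera_height(ground_points_cam):
    """Estimate one external parameter, camera height, as the mean
    vertical coordinate of reconstructed road points (assumes a flat
    road and a camera y-axis pointing down toward the road)."""
    return sum(y for _, y, _ in ground_points_cam) / len(ground_points_cam)
```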

METHODS AND APPARATUS FOR AUTOMATIC COLLECTION OF UNDER-REPRESENTED DATA FOR IMPROVING A TRAINING OF A MACHINE LEARNING MODEL
20230394849 · 2023-12-07

In some embodiments, a method can include executing a first machine learning model to detect at least one lane in each image from a first set of images. The method can further include determining an estimated location of a vehicle for each image, based on localization data captured using at least one localization sensor disposed at the vehicle. The method can further include selecting lane geometry data for each image, from a map and based on the estimated location of the vehicle. The method can further include executing a localization model to generate a set of offset values for the first set of images based on the lane geometry data and the at least one lane in each image. The method can further include selecting a second set of images from the first set of images based on the set of offset values and a previously determined offset threshold.
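The selection step reduces to: measure how far the model's lane detection sits from the map's lane geometry, then keep images where that offset exceeds the threshold, since large disagreement flags scenes the model is likely under-trained on. A sketch with hypothetical names and an illustrative threshold.

```python
def lateral_offset(detected_lane_x, map_lane_x):
    """Offset (m) between a detected lane position and the map's lane
    geometry at the vehicle's estimated location."""
    return abs(detected_lane_x - map_lane_x)


def select_underrepresented(images, offsets, threshold=0.5):
    """Keep images whose model-vs-map offset exceeds the previously
    determined threshold; these form the second (collection) set."""
    return [img for img, off in zip(images, offsets) if off > threshold]
```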

PARKING SPACE DETECTION METHOD AND DEVICE, VEHICLE, AND STORAGE MEDIUM
20230394848 · 2023-12-07

The disclosure provides a parking space detection method and device, a vehicle, and a storage medium. The method includes: separately inputting an obtained current frame image into a pre-trained parking space detection model, a pre-trained obstacle detection model, and a pre-trained scenario detection model, to obtain a parking space prediction result, an obstacle prediction result, and a scenario prediction result; determining, based on a detected positional relationship between any target parking space and a vehicle-mounted camera, whether the target parking space is a parking space where the vehicle-mounted camera is located; performing, if yes, verification on the parking space prediction result of the target parking space by using the obstacle prediction result and the scenario prediction result, to obtain a single-frame prediction result of the target parking space; and performing, if no, verification on the parking space prediction result of the target parking space by using the scenario prediction result, to obtain a single-frame prediction result of the target parking space. In this way, after verification based on the plurality of verification mechanisms, a highly precise parking space detection result is given in a complex scenario, and a precise prediction result is given without requiring the vehicle to completely pass the target parking space, which improves the parking space release rate.
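The branching verification logic can be sketched as a single conditional: when the camera is inside the target space, check the prediction against both the obstacle and scenario results; otherwise against the scenario result alone. Boolean stand-ins are used for the model outputs, since the abstract does not fix their types; the function name is hypothetical.

```python
def verify_parking_space(camera_in_space, space_pred, obstacle_pred,
                         scenario_pred):
    """Single-frame verification of a parking space prediction.

    camera_in_space: camera is located inside the target space.
    space_pred:      parking-space model says the space is free.
    obstacle_pred:   obstacle model detected an obstacle in the space.
    scenario_pred:   scenario model says the scene supports parking.
    """
    if camera_in_space:
        return space_pred and not obstacle_pred and scenario_pred
    return space_pred and scenario_pred
```

Per-frame results like this would then be aggregated over frames before a space is released to the planner.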