Patent classifications
G06T2207/30256
Method and device for checking a calibration of environment sensors
A device, transportation vehicle, and method for checking a calibration of surroundings sensors, wherein the surroundings sensors at least partially detect similar surroundings and provide mutually time-synchronized sensor data; periodic features are detected, at least for at least one distinguished area, in the sensor data of the surroundings sensors belonging to the same surroundings; at least for the at least one distinguished area, the sensor data corresponding to that area are transformed to a frequency domain; a frequency and/or a phase angle of the periodic features is determined in the transformed sensor data; a decalibration of the surroundings sensors is detected based on a comparison of the determined frequencies and/or the determined phase angles; and a result of the check is provided.
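The frequency-domain comparison could be sketched as follows; the signal shape (a 1-D intensity profile of a periodic feature such as lane markings) and all thresholds are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def dominant_freq_phase(signal, sample_spacing):
    """Return the dominant frequency and its phase angle via an FFT."""
    spec = np.fft.rfft(signal - np.mean(signal))
    freqs = np.fft.rfftfreq(len(signal), d=sample_spacing)
    k = np.argmax(np.abs(spec[1:])) + 1  # skip the DC bin
    return freqs[k], np.angle(spec[k])

def check_decalibration(sig_a, sig_b, sample_spacing,
                        freq_tol=0.01, phase_tol=0.2):
    """Flag a decalibration when the two time-synchronized sensors
    disagree on the dominant frequency or phase of a periodic feature."""
    fa, pa = dominant_freq_phase(sig_a, sample_spacing)
    fb, pb = dominant_freq_phase(sig_b, sample_spacing)
    dphase = np.angle(np.exp(1j * (pa - pb)))  # wrap to [-pi, pi]
    return abs(fa - fb) > freq_tol or abs(dphase) > phase_tol
```

In this sketch a phase offset between the two sensor profiles, even at the same frequency, is enough to flag a decalibration, matching the abstract's "frequencies and/or phase angles" comparison.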
Apparatus for recognizing parking area for autonomous parking and method thereof
A vehicle parking assistance device includes an image sensing device, an artificial intelligence learning device, and a controller connected with the image sensing device and the artificial intelligence learning device. The controller is configured to obtain an image using the image sensing device, detect at least one parking line pair in the obtained image, detect a parking slot based on deep learning, detect a parking area based on the detected parking slot and the at least one detected parking line pair, detect an entrance point for the parking area, and generate parking information based on the parking area and the entrance point.
VEHICLE POSITION INFORMATION ACQUISITION DEVICE, VEHICLE POSITION INFORMATION ACQUISITION SYSTEM, AND VEHICLE POSITION INFORMATION ACQUISITION METHOD
A vehicle position information acquisition device includes: a position information detector that detects position information of a vehicle; a map data storage that stores a feature type of a feature on a map and a position data group in association with each other; an imaging unit that images a region ahead of the vehicle; a feature detector that detects a surrounding feature; a reference position data group extractor that extracts a reference position data group; a reference position data group storage; a comparative position data group extractor that extracts a comparative position data group; a comparative position data group storage; an error processor that associates reference position data with comparative position data having a shortest distance and calculates an error in a distance between the reference position data and the comparative position data that are associated with each other; and a correction unit that corrects the position information.
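The shortest-distance association and error correction could be sketched as follows; the mean-offset correction is an illustrative assumption, since the abstract only specifies nearest-distance association and a distance error between the associated data:

```python
import numpy as np

def associate_and_correct(reference, comparative, position):
    """Pair each reference position datum with the nearest comparative
    datum, compute the per-pair error, and correct the vehicle position
    by the mean offset (Nx2 arrays of map-frame coordinates)."""
    d = np.linalg.norm(reference[:, None, :] - comparative[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)            # shortest-distance association
    errors = reference - comparative[nearest]  # per-pair offset vectors
    correction = errors.mean(axis=0)           # average systematic offset
    return position + correction, np.linalg.norm(errors, axis=1)
```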
Lane mapping and localization using periodically-updated anchor frames
A hybrid approach for using reference frames is presented in which a series of anchor frames is used, effectively resetting a global frame upon a trigger event. With each new anchor frame, parameter values for lane boundary estimates (known as lane boundary states) can be recalculated with respect to the new anchor frame. Triggering events may be based on a length of time, distance traveled, and/or an uncertainty value.
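A minimal sketch of the anchor-frame reset logic, assuming 2-D poses and purely translational re-anchoring; the trigger thresholds and the state representation are illustrative assumptions:

```python
import numpy as np

class AnchorFrameManager:
    """Reset the reference frame when a trigger fires and re-express
    lane boundary states relative to the new anchor."""

    def __init__(self, max_time=30.0, max_distance=500.0, max_uncertainty=2.0):
        self.max_time = max_time            # seconds since last anchor
        self.max_distance = max_distance    # metres travelled since last anchor
        self.max_uncertainty = max_uncertainty
        self.elapsed = 0.0
        self.travelled = 0.0

    def should_reset(self, uncertainty):
        return (self.elapsed >= self.max_time
                or self.travelled >= self.max_distance
                or uncertainty >= self.max_uncertainty)

    def update(self, dt, ds, vehicle_pose, lane_states, uncertainty):
        self.elapsed += dt
        self.travelled += ds
        if self.should_reset(uncertainty):
            # New anchor at the current vehicle pose; shift all lane
            # boundary states into the new frame.
            lane_states = lane_states - vehicle_pose
            self.elapsed = 0.0
            self.travelled = 0.0
        return lane_states
```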
Calibration apparatus and calibration method
High-accuracy calibration can be realized even when the calibration is performed while the vehicle is running on an actual road. Specifically, the calibration apparatus is mounted in a vehicle and includes: an image acquisition unit configured to acquire captured images obtained by a camera, which is mounted in the vehicle, capturing images of surroundings of the vehicle; a feature point extraction unit configured to extract a plurality of feature points from the captured images; a tracking unit configured to track the same feature point from a plurality of the captured images captured at different times with respect to each of the plurality of feature points, which are extracted by the feature point extraction unit, and record the tracked feature point as a feature point trajectory; a lane recognition unit configured to recognize an own vehicle's lane which is a driving lane on which the vehicle is running, from the captured images; a sorting unit configured to sort out the feature point trajectory, which is in the same plane as a plane included in the own vehicle's lane recognized by the lane recognition unit, among feature point trajectories tracked and recorded by the tracking unit; and an external parameter estimation unit configured to estimate external parameters for the camera by using the feature point trajectory sorted out by the sorting unit.
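One way the sorting unit could be realised is to keep only trajectories consistent with the ground-plane homography between consecutive images; this homography test is a swapped-in technique for illustration, not stated in the abstract, and H is assumed given:

```python
import numpy as np

def homography_consistent(traj, H, tol=2.0):
    """Check whether a 2-point pixel trajectory (p at time t, q at t+1)
    is explained by the ground-plane homography H mapping t to t+1.
    Points on the road plane satisfy q ~ H p; other points do not."""
    p, q = traj
    pred = H @ np.array([p[0], p[1], 1.0])
    pred = pred[:2] / pred[2]
    return np.linalg.norm(pred - q) < tol

def sort_ground_trajectories(trajectories, H, tol=2.0):
    """Keep the trajectories lying in the plane of the own vehicle's lane."""
    return [t for t in trajectories if homography_consistent(t, H, tol)]
```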
Methods and apparatus for depth estimation on a non-flat road with stereo-assisted monocular camera in a vehicle
A non-transitory processor-readable medium stores code representing instructions to be executed by the processor. The code comprises code to cause the processor to receive a first image and a second image from a stereo camera pair disposed with a vehicle. The code causes the processor to detect, using a machine learning model, an object based on the first image, the object located within a pre-defined area within a vicinity of the vehicle. The code causes the processor to determine a distance between the object and the vehicle based on disparity between the first image and the second image. The code causes the processor to determine a longitudinal value of the vehicle based on the distance and a height of the vehicle. The code causes the processor to send an instruction to facilitate driving of the vehicle based on a road profile associated with the longitudinal value.
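The disparity-to-distance step follows the standard pinhole stereo relation Z = f·B/d; a minimal sketch (the parameter names are assumptions, not from the patent):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: distance Z = focal length * baseline / disparity,
    with disparity in pixels, focal length in pixels, baseline in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700 px focal length and a 0.5 m baseline, a 35 px disparity corresponds to an object 10 m away.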
METHOD FOR UPDATING ROAD SIGNS AND MARKINGS ON BASIS OF MONOCULAR IMAGES
The present invention discloses a method for updating road signs and markings on the basis of monocular images, comprising the following steps: acquiring street images of urban roads together with the GPS phase center coordinates and spatial attitude data corresponding to the street images; extracting image coordinates of the road signs and markings; constructing a sparse three-dimensional model and then generating a streetscape image depth map; calculating the spatial position of each road sign and marking from the semantic and depth values of the image, the collinearity equation, and the spatial distance relation; if the same road sign or marking is visible in multiple views, solving for its position; and vectorizing the obtained position information and fusing it into the original data to update the road sign data.
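The position-calculation step via the collinearity equation amounts to inverting the pinhole projection once the depth is known; a minimal sketch, assuming intrinsics K and a camera pose (R, t) mapping world to camera coordinates:

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    """Back-project a pixel with known depth into world coordinates:
    X_world = R^T (depth * K^{-1} [u, v, 1]^T - t)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    X_cam = depth * ray                              # scale by the depth value
    return R.T @ (X_cam - t)                         # undo the camera pose
```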
BOUNDING BOX ESTIMATION AND LANE VEHICLE ASSOCIATION
Disclosed are techniques for estimating a 3D bounding box (3DBB) from a 2D bounding box (2DBB). Conventional techniques to estimate 3DBB from 2DBB rely upon classifying target vehicles within the 2DBB. When the target vehicle is misclassified, the projected bounding box from the estimated 3DBB is inaccurate. To address such issues, it is proposed to estimate the 3DBB without relying upon classifying the target vehicle.
ROAD DETERIORATION DIAGNOSING DEVICE, ROAD DETERIORATION DIAGNOSING METHOD, AND RECORDING MEDIUM
A road deterioration diagnosing device acquires an image capturing a road together with the date and time when the image was captured as well as the location and direction in which it was captured, detects deterioration of the road surface shown in the acquired image, calculates the direction in which a shadow of a building may be cast over the road surface shown in the image using the date, the time, the location, and the direction, and determines the possibility of erroneous detection of the deterioration on the basis of the direction of the detected deterioration and the direction in which the shadow may be cast.
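The shadow-direction step could be sketched with a standard low-accuracy solar-position approximation (the declination and azimuth formulas below are textbook approximations, not specified by the patent, and solar time is assumed given), with shadows pointing directly away from the sun:

```python
import math

def solar_position(day_of_year, solar_hour, lat_deg):
    """Approximate solar elevation and azimuth in degrees
    (azimuth measured clockwise from north)."""
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    sin_el = (math.sin(lat) * math.sin(d)
              + math.cos(lat) * math.cos(d) * math.cos(h))
    el = math.asin(sin_el)
    cos_az = ((math.sin(d) - sin_el * math.sin(lat))
              / (math.cos(el) * math.cos(lat)))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle > 0:          # afternoon: sun is west of the meridian
        az = 360.0 - az
    return math.degrees(el), az

def shadow_azimuth(day_of_year, solar_hour, lat_deg):
    """Shadows are cast directly away from the sun."""
    _, az = solar_position(day_of_year, solar_hour, lat_deg)
    return (az + 180.0) % 360.0
```

A detected "crack" whose orientation matches this shadow azimuth would then be treated as a possible erroneous detection.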
Processing of Sensor Data for a Driver Assistance System
In order to process sensor data for a driver assistance system oriented towards the driver's comfort, sensor data that is sensed by a sensor device and describes objects is preprocessed such that a distinction is made between a driving zone and a non-driving zone, where the driving zone is designated as an object driving zone. The object driving zone is delimited by a boundary line. Since the sensor data is processed for a comfort-oriented driver assistance system, it does not have to describe the entire theoretical driving zone. Rather, the boundary line is used to delimit the driving zone within which the vehicle can normally be expected to drive. Based thereon, it is easy to determine an appropriate boundary line and significantly reduce the volume of data to be transmitted from the sensor device to a central control device of the comfort-oriented driver assistance system in order to describe the sensed objects.