Patent classification: G06T2207/30256
APPARATUS, METHOD, AND COMPUTER PROGRAM FOR ESTIMATING ROAD EDGE
An apparatus for estimating a road edge includes a processor configured to: estimate a trajectory of a vehicle; estimate the position of an edge of a road from images generated by a camera during travel of the vehicle; identify an undetected section when an image from which the edge of the road is not detected was generated; determine whether the road in the undetected section has a structure with a road edge; estimate the position of the edge of the road in the undetected section, based on the positions of the edge of the road in the sections in front of and behind the undetected section, when the road in the undetected section has a structure with an edge; and skip estimating the position of the edge of the road in the undetected section when the road in the undetected section has a structure without an edge.
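The fill-in step of this abstract can be illustrated with a minimal sketch: linearly interpolating missing edge positions between the detected sections on either side, and leaving them unestimated where the road has no edge structure. The function name, the use of lateral offsets, and the `None` encoding for undetected sections are illustrative assumptions, not details from the patent.

```python
def fill_undetected_edge(positions, has_edge_structure):
    """Fill in road-edge positions for undetected sections.

    positions: list of lateral edge offsets (float), with None where the
        edge was not detected in the corresponding image.
    has_edge_structure: parallel list of bools; True if the road in that
        section is known to have an edge structure (e.g. a curb).
    Returns a new list with interpolated values where both neighbours of
    the undetected run were detected; sections without an edge structure
    are left unestimated (None), as in the abstract.
    """
    out = list(positions)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            lo, hi = i - 1, i
            while hi < n and out[hi] is None:
                hi += 1  # find the detected section behind the gap
            if lo >= 0 and hi < n:
                for j in range(i, hi):
                    if has_edge_structure[j]:
                        # linear interpolation between front and rear sections
                        t = (j - lo) / (hi - lo)
                        out[j] = out[lo] * (1 - t) + out[hi] * t
            i = hi
        else:
            i += 1
    return out
```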
DEVICE AND METHOD FOR DETECTING ROAD MARKING AND SYSTEM FOR MEASURING POSITION USING SAME
A device for detecting a road marking and a method thereof. The device includes a camera that photographs a road image, and a controller that detects class information of plural pixels in the road image, recognizes lines based on the class information of each pixel, and detects a road marking located between the recognized lines.
Method for Operating a Driver Information System in an Ego-Vehicle and Driver Information System
A method for operating a driver information system in an ego-vehicle is provided, wherein an operating state of a trailer device of the ego-vehicle is recorded. A driver information display is generated and output, wherein the driver information display comprises an ego-object which represents the ego-vehicle. In addition, the ego-object is represented from behind in a perspective view and the driver information display also comprises a lane object which represents a road section lying in front of the ego-vehicle in the direction of travel. The driver information display also comprises a trailer object which is formed according to a recorded operating state of the trailer device and which represents an attached device.
System and method for automatic assessment of comparative negligence for one or more vehicles involved in an accident
A system and a method for automatic assessment of comparative negligence of one or more vehicles involved in an accident. The system receives one or more of video input, accelerometer data, gyroscope data, magnetometer data, GPS data, lidar data, radar data, radio navigation data, and vehicle state data for the vehicle(s). The system automatically detects the occurrence of an accident and its timestamp. It then detects the accident type and the trajectory of the vehicle(s) based on the data received at the detected timestamp. A scenario of the accident is generated and compared with a parametrized accident guideline to produce a comparative negligence assessment for the vehicle(s) involved in the accident.
Data processing method, apparatus and terminal
At a computing system comprising one or more processors and memory, the computing system receives road data collected on a moving vehicle along a road, the road data comprising a two-dimensional streetscape image, a three-dimensional point cloud, and inertial navigation data. Using the inertial navigation data, it identifies within the two-dimensional streetscape image a ground region image corresponding to the road, based on the spatial position relation between the two-dimensional streetscape image and the three-dimensional point cloud. It then detects at least one target road traffic marking in the ground region image and determines three-dimensional coordinates of the at least one target road traffic marking based on the same spatial position relation.
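The core of the 2D/3D spatial position relation described above is a standard pinhole projection of a point-cloud point into the streetscape image. The sketch below assumes the inertial navigation data has already been reduced to a world-to-camera rotation `R` and translation `t`; these names and the function itself are illustrative, not from the patent.

```python
import numpy as np

def project_point(point_w, R, t, K):
    """Project a 3-D world point into the 2-D streetscape image.

    point_w: (3,) point in world coordinates (e.g. from the point cloud).
    R, t: camera extrinsics (world -> camera), derived here, by assumption,
        from the inertial navigation data.
    K: 3x3 camera intrinsic matrix.
    Returns (u, v) pixel coordinates, or None if the point lies behind
    the camera.
    """
    p_cam = R @ point_w + t          # transform into the camera frame
    if p_cam[2] <= 0:
        return None                  # behind the image plane
    uv = K @ p_cam                   # perspective projection
    return uv[0] / uv[2], uv[1] / uv[2]
```

Inverting this relation over the detected marking pixels (ray-casting back onto the nearby point-cloud surface) is what yields the three-dimensional coordinates of the marking.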
Self-position estimation method and self-position estimation device
A self-position estimation method includes: detecting a relative position between the moving object and a target present in its surroundings; storing, as target position data, the relative position shifted by the amount the moving object has moved; selecting the target position data on the basis of the reliability of its relative position with respect to the moving object; and comparing the selected target position data with map information including position information on targets present on or around the road, thereby estimating a self-position, which is the current position of the moving object.
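The select-and-compare step can be sketched as follows: each reliable observation of a map target implies a candidate self-position (map position minus relative position), and the candidates are combined. The data layout, the reliability threshold, and the simple averaging are assumptions made for illustration; the patent does not specify this particular combination rule.

```python
import numpy as np

def estimate_self_position(observations, map_targets, min_reliability=0.5):
    """Estimate the moving object's position from target observations.

    observations: list of (relative_xy, map_index, reliability) tuples,
        where relative_xy is the stored target position relative to the
        moving object and map_index links it to a known map target.
    map_targets: sequence of absolute target positions from the map.
    Only observations whose reliability meets the threshold are selected.
    Returns the averaged implied self-position, or None if no
    observation qualifies.
    """
    candidates = [np.asarray(map_targets[i], dtype=float) - np.asarray(rel, dtype=float)
                  for rel, i, r in observations if r >= min_reliability]
    if not candidates:
        return None
    # each candidate is an implied self-position; average them
    return np.mean(candidates, axis=0)
```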
Movable carrier auxiliary system
A movable carrier auxiliary system includes an environment detecting device, a state detecting device, and a control device. The environment detecting device includes at least one image capturing module and an operation module. The image capturing module captures an environment image in a traveling direction of the movable carrier. The operation module detects whether there is at least one of a target carrier and a lane marking in the environment image captured in the traveling direction, and generates a detection signal. The state detecting device detects a moving state of the movable carrier and generates a state signal. The control device continuously receives the detection signal and the state signal, and controls the movable carrier to follow the target carrier or the lane marking according to the detection signal and the state signal upon receiving a detection signal indicating that the target carrier or the lane marking is present in the environment image.
METHOD AND SYSTEM FOR DETECTING POSITION RELATION BETWEEN VEHICLE AND LANE LINE, AND STORAGE MEDIUM
The present invention relates to the field of intelligent driving. Disclosed is a method for detecting the position relation between a vehicle and a lane line. The method for detecting the position relation between a vehicle and a lane line comprises: obtaining a vehicle model, the vehicle model being represented by a plurality of first coordinates in a world coordinate system; obtaining a lane line image, the lane line image being captured by a camera disposed on a vehicle; obtaining a calibration parameter of the camera; determining, according to the lane line image and the calibration parameter, a first line segment of a lane line mapped into the world coordinate system; and determining the position relation between the lane line and the vehicle according to the position relation between the first line segment and the plurality of first coordinates in the world coordinate system. According to the detection method, the position relation between the lane line and the vehicle can be determined without using a positioning system, so that the construction cost of intelligent driving is reduced.
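The mapping of lane-line image points into the world coordinate system without a positioning system can be sketched with the standard planar homography for the ground plane (z = 0): H = K [r1 r2 t], where K is the intrinsic matrix and (R, t) are the camera's calibration (extrinsic) parameters. The function and its interface are an assumed illustration of this geometry, not the patent's implementation.

```python
import numpy as np

def pixels_to_ground(pixels, K, R, t):
    """Map image pixels of a lane line onto the world ground plane (z = 0).

    K: 3x3 camera intrinsic matrix.
    R, t: extrinsics (world -> camera), i.e. the calibration parameters
        of the camera disposed on the vehicle.
    Returns a list of (x, y) world coordinates, from which the first
    line segment of the lane line can be formed.
    """
    # for ground-plane points (z = 0), projection reduces to a homography
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    H_inv = np.linalg.inv(H)
    out = []
    for u, v in pixels:
        w = H_inv @ np.array([u, v, 1.0])
        out.append((w[0] / w[2], w[1] / w[2]))  # dehomogenize
    return out
```

Comparing the resulting segment endpoints with the vehicle model's first coordinates in the same world frame then gives the position relation directly, with no GPS involved.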
SYSTEM AND METHOD FOR DETERMINING CAR TO LANE DISTANCE
A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
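The per-wheel distance step can be illustrated directly on the two segmentation maps: take every pixel labelled "wheel" and every pixel labelled "lane" and find the smallest separation. Treating the maps as boolean masks and using a brute-force pairwise distance are simplifying assumptions for the sketch (a real system would likely use a distance transform on large images).

```python
import numpy as np

def wheel_to_lane_distance(wheel_mask, lane_mask):
    """Minimum pixel distance between a wheel mask and a lane mask.

    Both masks are boolean HxW arrays taken from the wheel and lane
    segmentation maps. Returns the smallest Euclidean distance between
    any wheel pixel and any lane pixel (0.0 if they overlap), or None
    if either mask is empty.
    """
    wheel_pts = np.argwhere(wheel_mask)
    lane_pts = np.argwhere(lane_mask)
    if len(wheel_pts) == 0 or len(lane_pts) == 0:
        return None
    # pairwise distances between all wheel pixels and all lane pixels
    d = np.linalg.norm(wheel_pts[:, None, :] - lane_pts[None, :, :], axis=2)
    return float(d.min())
```

Converting this pixel distance to a metric car-to-lane distance would additionally require the camera's ground-plane calibration.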
MOBILE OBJECT CONTROL DEVICE, MOBILE OBJECT CONTROL METHOD, LEARNING DEVICE, LEARNING METHOD, AND STORAGE MEDIUM
Provided is a mobile object control device comprising a storage medium storing computer-readable commands and a processor connected to the storage medium, the processor executing the computer-readable commands to: acquire a subject bird's eye view image obtained by converting an image, which is photographed by a camera mounted in a mobile object to capture a surrounding situation of the mobile object, into a bird's eye view coordinate system; input the subject bird's eye view image into a trained model, which is trained to receive input of a bird's eye view image to output at least a three-dimensional object in the bird's eye view image, to detect a three-dimensional object in the subject bird's eye view image; detect a travelable space of the mobile object based on the detected three-dimensional object; and cause the mobile object to travel so as to pass through the travelable space.
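The final two steps, deriving a travelable space from the detected three-dimensional objects, can be sketched as an occupancy grid over the bird's-eye-view image: cells covered by a detection (plus a safety margin) are blocked, everything else is travelable. The grid representation, cell-based detections, and margin parameter are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def travelable_space(grid_shape, objects, margin=1):
    """Mark travelable cells in a bird's-eye-view occupancy grid.

    grid_shape: (rows, cols) of the BEV grid.
    objects: list of (row, col) cells where the trained model detected
        a three-dimensional object in the bird's eye view image.
    margin: safety margin (in cells) dilated around each object.
    Returns a boolean grid; True means the mobile object may pass through.
    """
    free = np.ones(grid_shape, dtype=bool)
    for r, c in objects:
        # block the object's cell plus a square safety margin around it
        r0, r1 = max(r - margin, 0), min(r + margin + 1, grid_shape[0])
        c0, c1 = max(c - margin, 0), min(c + margin + 1, grid_shape[1])
        free[r0:r1, c0:c1] = False
    return free
```

A path planner would then steer the mobile object through the connected region of `True` cells.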