Patent classifications
G06T2207/30256
Change point detection device and map information distribution system
A change point detection device includes: a memory 170 that stores map information representing a structure associated with a traveling condition on and around a road; an object detection unit 162 that detects, from an image acquired by an in-vehicle camera 110 mounted on a vehicle 100 and representing the environment around the vehicle 100, a shielding object 20 hiding the structure; a collation unit 163 that eliminates from the map information the structure hidden by the shielding object 20, collates the image with the map information, and calculates a coincidence degree between the image and the map information; and a change point detection unit 164 that determines, when the coincidence degree is less than or equal to a predetermined threshold value, that the structure represented in the image has a change point different from the corresponding structure represented in the map information.
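The masked comparison described in this abstract can be sketched as follows. The boolean-array representation, function names, and the 0.8 threshold are illustrative assumptions, not details from the patent:

```python
import numpy as np

def coincidence_degree(image_feat, map_feat, occlusion_mask):
    """Compare image features to map features, ignoring occluded pixels.

    All inputs are boolean arrays of the same shape (an assumed encoding).
    occlusion_mask is True where a shielding object hides the structure,
    so those pixels are eliminated from the collation.
    """
    visible = ~occlusion_mask
    if visible.sum() == 0:
        return 1.0  # nothing visible to compare; assume no change
    matches = (image_feat == map_feat) & visible
    return matches.sum() / visible.sum()

def has_change_point(image_feat, map_feat, occlusion_mask, threshold=0.8):
    # A change point is flagged when the coincidence degree is less than
    # or equal to the threshold (the threshold value is an assumption).
    return coincidence_degree(image_feat, map_feat, occlusion_mask) <= threshold
```

Excluding the occluded pixels before computing the ratio is what keeps a temporarily parked truck from being mistaken for a map change.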
Method and apparatus for processing an image of a road to identify a region of the image which represents an unoccupied area of the road
A method of processing an image of a scene including a road, acquired by a vehicle-mounted camera, to generate boundary data indicative of a boundary of an image region which represents an unoccupied area of the road, comprising: generating an LL sub-band image of an N-th level of an (N+1)-level discrete wavelet transform (DWT) decomposition of the image by iteratively low-pass filtering and down-sampling the image N times, where N is an integer equal to or greater than one; generating a sub-band image of an (N+1)-th level by high-pass filtering the LL sub-band image of the N-th level and down-sampling the result of the high-pass filtering, such that the sub-band image of the (N+1)-th level has a pixel region having substantially equal pixel values representing the unoccupied area of the road in the image; and generating the boundary data by determining a boundary of the pixel region.
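The iterated low-pass/down-sample step followed by one high-pass level can be sketched with a simple averaging (Haar-style) pyramid. The filter choice and normalization here are assumptions; the patent does not fix a wavelet family:

```python
import numpy as np

def haar_ll(img):
    """One LL step: low-pass (2x2 block average) and down-sample by 2."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def haar_hh(img):
    """High-pass + down-sample (diagonal detail): near zero on flat regions."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] - img[0::2, 1::2] -
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def road_region_subband(img, n=2):
    """N low-pass levels, then one high-pass level, per the abstract's recipe."""
    ll = img.astype(float)
    for _ in range(n):
        ll = haar_ll(ll)
    return haar_hh(ll)
```

A textureless road surface has nearly constant intensity, so after high-pass filtering it appears as a region of substantially equal (near-zero) values whose boundary can then be traced.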
Locked pedestrian detection and prediction for autonomous vehicles
Embodiments are disclosed for detecting a locked heading direction of a pedestrian and predicting a path for the pedestrian using the locked heading direction. According to one embodiment, a system perceives the environment of an autonomous driving vehicle (ADV) using one or more image capturing devices. The system detects a pedestrian in the perceived environment and determines the facing direction of the pedestrian relative to the ADV as one of left/right side, front, or back. If the facing direction is determined to be front or back, the system determines the lane nearest to the pedestrian, projects the pedestrian onto that lane to determine the lane direction at the projection, and, based on a predetermined condition, determines a heading direction for the pedestrian by locking it to the lane direction of the nearest lane.
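The locking step can be sketched as below. Representing headings as angles in radians, and resolving the lock to whichever lane direction (forward or reverse) is nearer the observed heading, are both assumptions standing in for the patent's "predetermined condition":

```python
import math

def locked_heading(facing, pedestrian_heading, lane_direction):
    """Return the heading direction to use for path prediction.

    facing: 'left', 'right', 'front', or 'back' relative to the ADV.
    pedestrian_heading, lane_direction: angles in radians (an assumed
    representation). Only front/back-facing pedestrians are locked to
    the lane direction, following the abstract.
    """
    def ang_diff(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    if facing in ('front', 'back'):
        # Lock to whichever lane direction (forward or reverse) is
        # closer to the pedestrian's own observed heading.
        fwd = lane_direction
        rev = (lane_direction + math.pi) % (2 * math.pi)
        return fwd if ang_diff(pedestrian_heading, fwd) <= ang_diff(pedestrian_heading, rev) else rev
    return pedestrian_heading  # side-facing: keep the observed heading
```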
METHOD OF CALIBRATING CAMERAS
A method for calibrating at least one of the six degrees of freedom of all or some of the cameras in a formation positioned for scene capturing, the method comprising a step of initial calibration before the scene capturing. This step comprises creating a reference video frame which contains a reference image of a stationary reference object. During scene capturing, the method further comprises a step of further calibration, wherein the position of the reference image of the stationary reference object within a captured scene video frame is compared to its position within the reference video frame, and a step of adapting the at least one of the six degrees of freedom of multiple cameras of the formation, if needed, in order to obtain improved scene capturing after the further calibration.
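The further-calibration comparison can be sketched as a drift check on the reference object's image position. The pixel-tuple representation and the tolerance value are assumptions; mapping a pixel offset back to a specific degree of freedom is not covered here:

```python
def calibration_offset(ref_pos, current_pos):
    """Pixel offset of the stationary reference object between the
    reference frame and a captured frame. Positions are (x, y) tuples."""
    return (current_pos[0] - ref_pos[0], current_pos[1] - ref_pos[1])

def needs_adjustment(ref_pos, current_pos, tol_px=2.0):
    """True when the reference image has drifted more than tol_px pixels,
    i.e. at least one of the camera's degrees of freedom should be
    re-adapted (the tolerance is an assumption, not from the abstract)."""
    dx, dy = calibration_offset(ref_pos, current_pos)
    return (dx * dx + dy * dy) ** 0.5 > tol_px
```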
SYSTEMS AND METHODS FOR LOCAL HORIZON AND OCCLUDED ROAD SEGMENT DETECTION
A system for navigating a host vehicle may include memory and at least one processor configured to receive a plurality of images acquired by a camera onboard the host vehicle; generate, based on analysis of the plurality of images, a road geometry model for a segment of road forward of the host vehicle; determine, based on analysis of at least one of the plurality of images, one or more indicators of an orientation of the host vehicle; and generate, based on the one or more indicators of orientation of the host vehicle and the road geometry model for the segment of road forward of the host vehicle, one or more output signals configured to cause a change in a pointing direction of a movable headlight onboard the host vehicle.
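One plausible reading of the output-signal generation is a pitch command that compensates vehicle orientation against the road geometry ahead. The linear combination below is an assumption for illustration, not the patent's formula:

```python
def headlight_pitch_command(road_slope_rad, vehicle_pitch_rad):
    """Pointing correction for a movable headlight (simplified sketch).

    road_slope_rad: slope of the road segment ahead, taken from the
        road geometry model (positive = uphill).
    vehicle_pitch_rad: host-vehicle pitch derived from image-based
        orientation indicators (positive = nose up).
    Tilting by the difference keeps the beam following the road surface
    rather than the vehicle body, e.g. lighting an occluded segment
    beyond a local horizon when cresting a hill.
    """
    return road_slope_rad - vehicle_pitch_rad
```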
BOUNDING BOX ESTIMATION AND OBJECT DETECTION
Disclosed are techniques for estimating a 3D bounding box (3DBB) from a 2D bounding box (2DBB). Conventional techniques to estimate 3DBB from 2DBB rely upon classifying target vehicles within the 2DBB. When the target vehicle is misclassified, the projected bounding box from the estimated 3DBB is inaccurate. To address such issues, it is proposed to estimate the 3DBB without relying upon classifying the target vehicle.
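One classification-free ingredient for such an estimate is back-projecting the 2DBB's bottom edge to the ground plane, which needs only camera geometry, not a class-dependent size prior. The flat-ground assumption, pinhole intrinsics, and function name below are illustrative, not the patent's method:

```python
import numpy as np

def bottom_center_to_ground(u, v, fx, fy, cx, cy, cam_height):
    """Back-project the bottom-center pixel (u, v) of a 2D bounding box
    to the ground plane, giving the target's position without any
    class-dependent size prior. Assumes a pinhole camera with known
    intrinsics fx, fy, cx, cy, mounted cam_height above a flat road,
    optical axis parallel to the ground, y pointing down."""
    # Ray direction in camera coordinates for pixel (u, v).
    x = (u - cx) / fx
    y = (v - cy) / fy
    if y <= 0:
        raise ValueError("pixel ray does not intersect the ground ahead")
    # Scale the ray so it hits the plane y = cam_height.
    z = cam_height / y
    return np.array([x * z, cam_height, z])
```

Because no vehicle class is consulted, a misclassified target (e.g. a truck labeled as a car) cannot corrupt the estimated position, which is the failure mode the abstract calls out.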
METHOD AND APPARATUS FOR RECOGNIZING VEHICLE LANE CHANGE TREND
Example methods and apparatus for recognizing a vehicle lane change trend are described. One example method includes obtaining laser point cloud data of a detected target vehicle. A first distance relationship value between the target vehicle and the center line of the lane in which the current vehicle is located is obtained based on the laser point cloud data, and a second distance relationship value between the center line of the lane and the target vehicle is obtained. A first confidence for the first distance relationship value and a second confidence for the second distance relationship value are calculated, and a fusion distance relationship value is then calculated based on the first confidence and the second confidence. Whether the target vehicle has a lane change trend is determined based on the fusion distance relationship value.
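A confidence-weighted average is one plausible reading of the fusion step; the exact formula and the trend criterion below are assumptions, not details from the abstract:

```python
def fuse(d1, c1, d2, c2):
    """Confidence-weighted fusion of two distance-relationship values.

    d1: value derived from the laser point cloud, with confidence c1.
    d2: value from the second source, with confidence c2.
    """
    return (c1 * d1 + c2 * d2) / (c1 + c2)

def lane_change_trend(fused_values, threshold=0.5):
    """Flag a lane change trend when the fused distance to the lane
    center line has shrunk and fallen below a threshold (both the
    criterion and the threshold are illustrative assumptions)."""
    return fused_values[-1] < threshold and fused_values[-1] < fused_values[0]
```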
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM
A vehicle control device according to an embodiment includes an imager configured to image the surroundings of a host vehicle, a recognizer configured to recognize the surroundings situation of the host vehicle, a driving controller configured to control one or both of the speed and steering of the host vehicle, and a controller configured to control the driving controller. The controller determines the lane to which an object present around the host vehicle belongs, based on the positional relationship, on a two-dimensional image captured by the imager, between the left and right demarcation lines defining the host lane in which the host vehicle travels and the left and right edges of the object. When one of the left and right demarcation lines is not recognized, the controller determines the lane to which the object belongs based on the positional relationship between the recognized demarcation line and the edge of the object.
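The lane-assignment comparison can be sketched on a single image row, using only the x-coordinates of the demarcation lines and the object's edges. The single-row simplification and the return labels are assumptions; the fallback when one line is missing follows the abstract:

```python
def lane_of_object(left_line_x, right_line_x, obj_left_x, obj_right_x):
    """Decide which lane an object belongs to from x-coordinates at the
    object's row on the 2D image.

    left_line_x / right_line_x: host-lane demarcation lines; either may
    be None when that line is not recognized, in which case only the
    recognized line and the object's edges are compared.
    Returns 'host', 'left', or 'right'.
    """
    if left_line_x is not None and obj_right_x < left_line_x:
        return 'left'   # object entirely left of the left line
    if right_line_x is not None and obj_left_x > right_line_x:
        return 'right'  # object entirely right of the right line
    return 'host'       # object overlaps the host lane
```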
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM
A vehicle control device according to an embodiment includes an imager configured to image surroundings of a host vehicle, a recognizer configured to recognize a surroundings situation of the host vehicle, a driving controller configured to control one or both of speed and steering of the host vehicle on the basis of a result of the recognition of the recognizer, and a controller configured to control the driving controller on the basis of imaging content of the imager, and the controller sets relative position information with respect to the host vehicle for an object around the host vehicle present on a two-dimensional image captured by the imager, and corrects the recognition result of the recognizer on the basis of the set relative position information.
SENSOR-BASED INFRASTRUCTURE CONDITION MONITORING
A processing system including at least one processor may gather first imaging data associated with an infrastructure item via at least a first imaging device, where the first imaging data represents a first condition of the infrastructure item, and gather second imaging data associated with the infrastructure item via at least a second imaging device, where at least one of the at least the first imaging device or the at least the second imaging device is mounted on a vehicle, and where the infrastructure item is visible from a road on which the vehicle operates. The processing system may then determine a second condition of the infrastructure item in accordance with the second imaging data, where the second condition comprises a potentially unsafe condition of the infrastructure item, and generate a report of the potentially unsafe condition of the infrastructure item in response to the determining.