Patent classifications
G06T2207/30256
Image annotation
A method of annotating road images, the method comprising implementing, at an image processing system, the following steps: receiving a time sequence of two-dimensional images as captured by an image capture device of a travelling vehicle; processing the images to reconstruct, in three-dimensional space, a path travelled by the vehicle; using the reconstructed vehicle path to determine expected road structure extending along the reconstructed vehicle path; and generating road annotation data for marking at least one of the images with an expected road structure location, by performing a geometric projection of the expected road structure in three-dimensional space onto a two-dimensional plane of that image.
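The final projection step can be sketched as a standard pinhole-camera projection of 3D road-structure points onto the image plane. This is a minimal illustrative example, not the patent's implementation; the intrinsic matrix, pose, and point values are invented for demonstration:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D world points onto the 2D image plane (pinhole model).

    points_3d: (N, 3) array of points in world coordinates.
    K: (3, 3) camera intrinsic matrix.
    R: (3, 3) rotation and t: (3,) translation, world -> camera.
    Returns (N, 2) pixel coordinates.
    """
    cam = points_3d @ R.T + t           # world frame -> camera frame
    uvw = cam @ K.T                     # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

# Example: a road-structure point 10 m ahead and 1.5 m below the camera
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                           # camera aligned with world axes
t = np.zeros(3)
pt = np.array([[0.0, 1.5, 10.0]])       # x right, y down, z forward (m)
print(project_points(pt, K, R, t))      # -> [[320. 360.]]
```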
Method and device for detecting lanes, driver assistance system and vehicle
A method of detecting lanes includes the steps: capturing (S1) a camera image (K) of a vehicle environment by a camera device (2) of a vehicle (5); determining (S2) feature points (P1 to P15) in the camera image (K), which feature points correspond to regions of possible lane boundaries (M1, M2); generating (S3) image portions of the captured camera image (K) respectively around the feature points (P1 to P15); analyzing (S4) the image portions using a neural network to classify the feature points (P1 to P15); and determining (S5) lanes in the vehicle environment taking account of the classified feature points (P1 to P15).
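Steps S3 and S4 can be sketched as follows. The neural network is replaced here by a trivial brightness-threshold stand-in, and the image, points, and threshold are all illustrative, not taken from the patent:

```python
import numpy as np

def extract_patches(image, points, half=2):
    """Crop a small window around each feature point (step S3)."""
    return [image[r-half:r+half+1, c-half:c+half+1] for (r, c) in points]

def classify_patch(patch):
    """Stand-in for the neural-network classifier (step S4): a simple
    mean-brightness threshold separates marking candidates from road."""
    return "marking" if patch.mean() > 0.1 else "road"

image = np.zeros((20, 20))
image[:, 10] = 1.0                      # a bright vertical lane marking
points = [(10, 10), (10, 3)]            # one point on the marking, one off
labels = [classify_patch(p) for p in extract_patches(image, points)]
print(labels)                           # -> ['marking', 'road']
```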
Self-localization estimation device
A self-localization estimation device includes: a map-information acquisition unit that acquires map information including lane information for specifying lanes in which vehicles are enabled to travel; a position calculation unit that calculates an own-vehicle absolute position being an absolute position of an own vehicle in response to navigation signals received from a plurality of navigation satellites, the position calculation unit including a self-location measurement unit, a vehicle-momentum measurement unit, and a dead-reckoning unit; and a position estimation unit that estimates, based on the map information and the own-vehicle absolute position, a corrected own-vehicle position being a corrected position of the own vehicle. The position estimation unit estimates the corrected own-vehicle position by superimposing a reliability of the map information and a reliability of the own-vehicle absolute position on each other.
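The reliability-weighted superposition can be sketched as inverse-variance weighting of the two position estimates. This is one plausible reading of "superimposing" the reliabilities, not the patent's stated formula; the variances and positions are illustrative:

```python
def fuse_position(map_pos, map_var, gnss_pos, gnss_var):
    """Combine a map-based and a satellite-based position estimate,
    weighting each by its reliability (inverse variance)."""
    w_map = 1.0 / map_var
    w_gnss = 1.0 / gnss_var
    return (w_map * map_pos + w_gnss * gnss_pos) / (w_map + w_gnss)

# The satellite fix is 4x less reliable than the map-matched estimate,
# so the fused position lands much closer to the map estimate.
print(fuse_position(100.0, 1.0, 104.0, 4.0))  # -> 100.8
```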
OPERATING AN AUTONOMOUS VEHICLE ACCORDING TO ROAD USER REACTION MODELING WITH OCCLUSIONS
The disclosure provides a method for operating an autonomous vehicle. To operate the autonomous vehicle, a plurality of lane segments that are in an environment of the autonomous vehicle is determined and a first object and a second object in the environment are detected. A first position for the first object is determined in relation to the plurality of lane segments, and particular lane segments that are occluded by the first object are determined using the first position. According to the occluded lane segments, a reaction time is determined for the second object and a driving instruction for the autonomous vehicle is determined according to the reaction time. The autonomous vehicle is then operated based on the driving instruction.
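The occlusion step can be sketched in 2D: a lane-segment center is hidden when it lies farther away than the occluding object and inside the angular shadow that object casts from the ego vehicle. All geometry and values here are illustrative, not from the patent:

```python
import math

def occluded_segments(ego, obstacle, radius, segments):
    """Return indices of lane-segment centers hidden behind a circular
    obstacle, as seen from the ego vehicle's position."""
    ox, oy = obstacle[0] - ego[0], obstacle[1] - ego[1]
    d = math.hypot(ox, oy)
    half_angle = math.asin(min(1.0, radius / d))   # shadow half-width
    heading = math.atan2(oy, ox)                   # direction to obstacle
    hidden = []
    for i, (sx, sy) in enumerate(segments):
        vx, vy = sx - ego[0], sy - ego[1]
        dist = math.hypot(vx, vy)
        ang = math.atan2(vy, vx)
        diff = abs((ang - heading + math.pi) % (2 * math.pi) - math.pi)
        if dist > d and diff < half_angle:         # behind and in shadow
            hidden.append(i)
    return hidden

ego = (0.0, 0.0)
obstacle = (10.0, 0.0)                  # e.g. a parked truck 10 m ahead
segments = [(20.0, 0.0), (20.0, 8.0), (5.0, 0.0)]
print(occluded_segments(ego, obstacle, 2.0, segments))  # -> [0]
```

Only the segment directly behind the truck is occluded; the off-axis and nearer segments remain visible, so reaction times for objects there need no occlusion adjustment.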
Method and apparatus for calibrating the extrinsic parameter of an image sensor
A method for calibrating one or more extrinsic parameters of an image sensor includes selecting a first set of parallel feature edges appearing in an image frame captured by the image sensor and determining reference vanishing points for the first set of parallel feature edges. A second set of parallel feature edges is then selected, and a plurality of points from the second set of parallel feature edges is projected onto the projection reference frame of the image sensor. The method determines, for the second set of parallel feature edges, second vanishing points located on the projection reference frame, and reduces any deviation in location of the second vanishing points from the reference vanishing points until the deviation is within acceptable predefined limits by recursively: modifying the pre-existing projection matrix, projecting a plurality of points from the second set of parallel feature edges onto the projection reference frame, and determining the second vanishing points after projecting.
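Determining a vanishing point can be sketched as intersecting the image-plane projections of two 3D-parallel edges, conveniently done in homogeneous coordinates. A minimal illustrative example, with made-up line coordinates:

```python
import numpy as np

def vanishing_point(line1, line2):
    """Intersect two image lines, each given as a pair of 2D points.

    In homogeneous coordinates, the line through points p and q is
    p x q, and two lines meet at the cross product of their vectors.
    """
    def homog_line(p, q):
        return np.cross([*p, 1.0], [*q, 1.0])
    v = np.cross(homog_line(*line1), homog_line(*line2))
    return v[:2] / v[2]     # back to inhomogeneous pixel coordinates

# Two image lines (projections of parallel 3D edges) converge at (4, 2)
l1 = ((0.0, 0.0), (2.0, 1.0))
l2 = ((0.0, 4.0), (2.0, 3.0))
print(vanishing_point(l1, l2))   # -> [4. 2.]
```

The recursive calibration loop in the abstract would repeatedly adjust the projection matrix until vanishing points computed this way coincide with the reference ones.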
Road surface area detection device
A road surface area detection device includes a normalized speed calculation portion configured to calculate a normalized speed based on a movement of a feature point in an image captured by a camera that is disposed in a vehicle; a determination range calculation portion configured to calculate a road surface determination range, which is indicated by a magnitude of the normalized speed, based on the normalized speeds of at least two feature points at different positions, in a width direction of the vehicle, within a predetermined central area centered on the vehicle in the width direction perpendicular to the vehicle traveling direction; and a road surface area identification portion configured to identify, as a road surface area on which the vehicle travels, a position in the width direction that includes the feature point whose normalized speed is within the road surface determination range.
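The identification step can be sketched as follows: feature points in the central area (assumed to be road surface) define a speed range, and any feature point whose normalized speed falls inside that range is labeled road surface. The tolerance and speed values are illustrative, not from the patent:

```python
def road_surface_positions(normalized_speeds, central_indices, tol=0.1):
    """Return indices of feature points whose normalized speed lies
    within the road-surface determination range derived from the
    central-area feature points."""
    ref = [normalized_speeds[i] for i in central_indices]
    lo, hi = min(ref) - tol, max(ref) + tol
    return [i for i, v in enumerate(normalized_speeds) if lo <= v <= hi]

# Last two points move like a wall / distant background, not the road
speeds = [0.52, 0.50, 0.49, 0.51, 0.90, 0.20]
print(road_surface_positions(speeds, central_indices=[1, 2]))  # -> [0, 1, 2, 3]
```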
Device and method for calibrating camera for vehicle
In accordance with an aspect of the present disclosure, there is provided a method of calibrating a camera for a vehicle, comprising: obtaining attitude angle information of the vehicle by using a traveling direction of the vehicle obtained based on a satellite signal, and a vertical direction from ground obtained based on a high definition map; obtaining attitude angle information of the camera mounted on the vehicle by matching an image captured by the camera to the high definition map; and obtaining coordinate system transformation information between the vehicle and the camera by using the attitude angle information of the vehicle and the attitude angle information of the camera.
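The final step, deriving the vehicle-to-camera transform from the two attitude estimates, can be sketched by composing the rotations. This reduced example uses yaw-only rotations for clarity; the angles are illustrative and not from the patent:

```python
import numpy as np

def rotation_z(yaw):
    """Rotation matrix for a yaw angle (radians) about the z axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Attitude of vehicle and camera, each w.r.t. a shared world frame
R_world_vehicle = rotation_z(np.deg2rad(30.0))
R_world_camera = rotation_z(np.deg2rad(35.0))

# Vehicle -> camera coordinate transform: compose the two attitudes
R_camera_vehicle = R_world_camera.T @ R_world_vehicle
yaw_offset = np.rad2deg(np.arctan2(R_camera_vehicle[1, 0],
                                   R_camera_vehicle[0, 0]))
print(round(yaw_offset, 1))             # -> -5.0 (camera yawed 5 deg)
```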
Map system, method and non-transitory computer-readable storage medium for autonomously navigating vehicle
A map system is a system for autonomously navigating a vehicle along a road segment and includes at least one processor. The processor acquires at least one image representing the environment of the vehicle from the imaging device, acquires the brightness of the environment of the vehicle, analyzes the image to calculate the position of the landmark with respect to the road on which the vehicle travels, determines the position of the own vehicle based on the position of the landmark calculated from the image and the map information stored in the server, and emits illumination light in the direction in which the landmark is estimated to be present when the brightness is equal to or less than a predetermined threshold value.
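The two key decisions can be sketched simply: the own-vehicle position follows from the landmark's map position minus its camera-measured offset, and illumination is triggered by a brightness threshold. All names, coordinates, and the threshold are illustrative:

```python
def localize_from_landmark(landmark_map_pos, landmark_rel_pos):
    """Own-vehicle position = map position of the landmark minus the
    landmark's camera-measured offset relative to the vehicle."""
    return (landmark_map_pos[0] - landmark_rel_pos[0],
            landmark_map_pos[1] - landmark_rel_pos[1])

def should_illuminate(brightness, threshold=0.3):
    """Emit illumination light toward the estimated landmark direction
    only when ambient brightness is at or below the threshold."""
    return brightness <= threshold

print(localize_from_landmark((105.0, 52.0), (5.0, 2.0)))  # -> (100.0, 50.0)
print(should_illuminate(0.2))                              # -> True
```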
ROAD SURFACE INSPECTION APPARATUS, ROAD SURFACE INSPECTION METHOD, AND PROGRAM
A road surface inspection apparatus (10) includes an image acquisition unit (110), a damage detection unit (120), and an output unit (130). The image acquisition unit (110) acquires an input image in which a road is captured. The damage detection unit (120) detects a damaged part of the road in the input image by using a damage determiner (122), built by machine learning, that determines damaged parts of a road. The output unit (130) outputs, to a display apparatus (30), any determination result of the damage determiner (122) whose certainty factor is equal to or less than a reference value, in a state distinguishable from the other determination results.
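The output unit's behavior can be sketched as splitting determination results by their certainty factor, so low-certainty ones can be rendered distinguishably (e.g. highlighted for manual review). The reference value and record format are illustrative:

```python
def split_by_certainty(detections, reference=0.6):
    """Separate damage determinations whose certainty factor is at or
    below the reference value from the confident ones."""
    uncertain = [d for d in detections if d["certainty"] <= reference]
    confident = [d for d in detections if d["certainty"] > reference]
    return confident, uncertain

dets = [{"id": 1, "certainty": 0.9},    # clear pothole
        {"id": 2, "certainty": 0.4}]    # ambiguous crack / shadow
confident, uncertain = split_by_certainty(dets)
print([d["id"] for d in uncertain])     # -> [2]
```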
System and method for vehicle localization
The present embodiments relate to efficient localization of a vehicle within a lane region. An imaging device onboard the vehicle may capture an image stream depicting an environment surrounding the vehicle. An image may be inspected to identify pixel bands indicative of lane markings of a road depicted in the image. Based on the identified pixel bands, a lane region indicating a lane of a roadway can be extracted. A lane drift metric may be generated that indicates an offset of the vehicle relative to a center of the lane region. An output action may be initiated based on the offset indicated in the lane drift metric. The lane region can be translated from a first frame to a second frame providing a top-down perspective of the vehicle within the lane region using a transformation matrix to assist in efficiently deriving the lane drift metric.
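The lane drift metric can be sketched from the detected marking positions: the lane center is the midpoint of the left and right pixel bands, and the offset is measured against the image midpoint, assuming a centrally mounted camera. Pixel values and the normalization are illustrative:

```python
def lane_drift(left_band_x, right_band_x, image_width):
    """Offset of the vehicle from the lane center, in lane widths.

    Assumes the camera sits at the vehicle's lateral center, so the
    image's horizontal midpoint is the vehicle position.
    Negative -> drifting left, positive -> drifting right.
    """
    lane_center = 0.5 * (left_band_x + right_band_x)
    lane_width = right_band_x - left_band_x
    vehicle_x = image_width / 2.0
    return (vehicle_x - lane_center) / lane_width

print(lane_drift(200.0, 440.0, 640))          # -> 0.0 (centered)
print(round(lane_drift(180.0, 420.0, 640), 3))  # -> 0.083 (right drift)
```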