Patent classifications
G06T2207/30256
METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR IDENTIFYING AND CORRECTING LANE GEOMETRY IN MAP DATA
A method is provided for using a machine learning model to predict lane geometry where incorrect or missing lane line geometry is detected. Methods may include: receiving a representation of lane line geometry for one or more roads of a road network; identifying an area within the representation including broken lane line geometry; generating a masked area of the area within the representation including the broken lane line geometry; processing the representation with the masked area through an inpainting model, where the inpainting model includes a generator network, where the representation is processed through the generator network which includes dilated convolution layers for inpainting of the masked area with corrected lane line geometry in a corrected representation; and updating a map database to include the corrected lane line geometry in place of the area including the broken lane line geometry based on the corrected representation.
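The mask-and-restore flow above can be sketched without the generator network itself. The snippet below is a toy stand-in, assuming a rasterized lane-geometry grid: the "inpainting" is plain per-column interpolation rather than the patented dilated-convolution generator, and every name is illustrative.

```python
import numpy as np

def inpaint_lane_grid(grid, mask):
    """Toy stand-in for the inpainting generator: restore masked
    cells of a rasterized lane-geometry grid by interpolating each
    column from its unmasked cells (illustrative only)."""
    out = grid.astype(float).copy()
    rows = np.arange(grid.shape[0])
    for x in range(grid.shape[1]):
        known = ~mask[:, x]
        if known.all() or not known.any():
            continue
        out[:, x] = np.interp(rows, rows[known], out[known, x])
    return out

# Lane marking runs down column 2; rows 2-3 are detected as broken and masked.
grid = np.zeros((6, 5)); grid[:, 2] = 1.0
mask = np.zeros((6, 5), dtype=bool); mask[2:4, 2] = True
grid[mask] = 0.0                       # the broken geometry
restored = inpaint_lane_grid(grid, mask)
print(restored[2, 2], restored[3, 2])  # -> 1.0 1.0
```

In the patented method the restoration is learned (a generator with dilated convolutions widening the receptive field across the mask); the interpolation here only mimics the input/output contract of that step.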
METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR IDENTIFYING AND CORRECTING INTERSECTION LANE GEOMETRY IN MAP DATA
A method is provided for using a machine learning model to predict lane geometry where incorrect or missing lane line geometry is detected. Methods may include: receiving a representation of lane line geometry for one or more roads of a road network; identifying an area within the representation as broken lane line geometry of an intersection using a machine learning model; generating a masked area of the broken lane line geometry of the intersection within the representation; processing the representation with the masked area using an inpainting model to generate an inpainted result within the masked area of restored lane line geometry of the intersection, where the inpainting model is trained using a set of representations identified as lane line geometry of intersections; and updating a map database to include the restored lane line geometry of the intersection in place of the broken lane line geometry of the intersection.
VEHICLE DRIVING ASSIST APPARATUS
A vehicle driving assist apparatus executes a vehicle collision prevention control when a target object distance from a vehicle to a target object is equal to or smaller than a predetermined distance. The apparatus acquires the target object distance on the basis of a position of the target object in a camera image taken by a camera and a height of the camera in a situation in which the movable load of the vehicle is at its maximum load capacity.
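The distance-from-image-position step admits a standard flat-road pinhole sketch: the ground distance to a point follows from the camera height and how far below the horizon that point appears. The camera height here is the one measured at maximum load capacity, as the abstract specifies; the focal length, pixel values, and threshold are illustrative assumptions.

```python
def ground_distance(v_pixel, horizon_v, focal_px, camera_height_m):
    """Flat-road pinhole estimate: distance = f * h / (v - v_horizon),
    where v is the object's ground-contact row in the image.
    camera_height_m is the height at maximum load capacity."""
    dv = v_pixel - horizon_v          # pixels below the horizon line
    if dv <= 0:
        raise ValueError("ground-contact point must lie below the horizon")
    return focal_px * camera_height_m / dv

def collision_control_active(distance_m, threshold_m):
    # control executes when the distance is equal to or smaller than
    # the predetermined distance
    return distance_m <= threshold_m

d = ground_distance(v_pixel=600, horizon_v=400, focal_px=1000, camera_height_m=1.2)
print(d)                                             # 1000 * 1.2 / 200 = 6.0
print(collision_control_active(d, threshold_m=8.0))  # -> True
```

Using the maximum-load camera height makes the estimate conservative when the vehicle is lightly loaded, which matches the collision-prevention intent.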
DETECTION OF OBSTRUCTIONS
A system and method for detecting obstructions. The system includes a camera coupled to a vehicle and configured to capture image data from a vehicle, and a computing device that includes a processor configured to: detect an edge of a roadway on which the vehicle is traveling; detect objects located proximate an edge of the roadway, based on the captured image data; determine a location of each detected object, based on the captured image data; calculate a distance between each detected object and the edge of the roadway; determine that at least one object of the detected objects is an obstruction, based on at least the calculated distance between each object of the at least one object and the edge of the roadway being below a threshold; and transmit a message to an external device, said message indicating the location of each detected object determined to be an obstruction.
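The core classification rule (edge distance below a threshold means obstruction) reduces to a small sketch. A straight roadway edge at a fixed lateral coordinate is an illustrative simplification of the detected edge, and the object/message fields are assumed names.

```python
def find_obstructions(objects, edge_x, threshold_m):
    """Flag each detected object whose lateral distance to the roadway
    edge (here simplified to the line x = edge_x) is below threshold_m,
    and build the message payload reporting its location."""
    messages = []
    for obj in objects:
        dist = abs(obj["x"] - edge_x)
        if dist < threshold_m:
            messages.append({"location": (obj["x"], obj["y"]),
                             "distance_to_edge": dist})
    return messages

detected = [{"x": 3.2, "y": 10.0},   # well clear of the edge
            {"x": 0.4, "y": 15.0}]   # 0.4 m from the edge
msgs = find_obstructions(detected, edge_x=0.0, threshold_m=1.0)
print(msgs)  # only the object 0.4 m from the edge is reported
```

In the claimed system the resulting message is transmitted to an external device; here it is just returned.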
CROWD-SOURCED 3D POINTS AND POINT CLOUD ALIGNMENT
Systems and methods are provided for vehicle navigation. In one implementation, a host vehicle-based sparse map feature harvester system may include at least one processor programmed to receive a plurality of images captured by a camera onboard the host vehicle as the host vehicle travels along a road segment in a first direction, wherein the plurality of images are representative of an environment of the host vehicle; detect one or more semantic features represented in one or more of the plurality of images, the one or more semantic features each being associated with a predetermined object type classification; identify at least one position descriptor associated with each of the detected one or more semantic features; identify three-dimensional feature points associated with one or more detected objects represented in at least one of the plurality of images; receive position information, for each of the plurality of images, wherein the position information is indicative of a position of the camera when each of the plurality of images was captured; and cause transmission of drive information for the road segment to an entity remotely-located relative to the host vehicle, wherein the drive information includes the identified at least one position descriptor associated with each of the detected one or more semantic features, the identified three-dimensional feature points, and the position information.
NEURAL NETWORK-BASED METHOD AND APPARATUS FOR IMPROVING LOCALIZATION OF A DEVICE
A method, apparatus and computer program product are provided for improving localization of a device. In this regard, a coarse location of the device is determined and a neural network is utilized to determine one or more image-based vectors from the device to respective features in an image captured by an image capture device associated with the device. At one or more location points defined relative to the coarse location of the device, (a) one or more map-based vectors extending from a respective location point to respective features as defined by map data are compared to (b) the one or more image-based vectors. Based on the comparison, a refined location of the device is determined. A method, apparatus and computer program are also provided for training the neural network to determine an image-based vector from the device to a feature in an image captured by the device.
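The refinement step (comparing map-based vectors at candidate points against image-based vectors) can be sketched as a grid search. The neural network that regresses the image-based vectors is out of scope here; the grid spacing, step count, and 2-D point model are illustrative assumptions.

```python
import math

def refine_location(coarse, image_vectors, feature_positions, grid=0.5, steps=2):
    """At candidate points around the coarse location, compare the
    map-based vector to each feature against the image-based vector
    and keep the candidate with the smallest total mismatch."""
    best, best_err = coarse, float("inf")
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            p = (coarse[0] + i * grid, coarse[1] + j * grid)
            err = 0.0
            for (fx, fy), (vx, vy) in zip(feature_positions, image_vectors):
                # map-based vector from candidate point p to the feature
                err += math.hypot((fx - p[0]) - vx, (fy - p[1]) - vy)
            if err < best_err:
                best, best_err = p, err
    return best

features = [(10.0, 0.0), (0.0, 10.0)]          # feature positions from map data
true_pos = (1.0, 1.0)
img_vecs = [(fx - true_pos[0], fy - true_pos[1]) for fx, fy in features]
refined = refine_location((0.0, 0.0), img_vecs, features)
print(refined)  # -> (1.0, 1.0)
```

With perfect image-based vectors the mismatch is zero at the true position, so the search recovers it exactly; in practice the candidate with minimal residual is the refined location.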
STOP DETECTION DEVICE AND STOP DETECTION METHOD
A stop detection device includes: an acquirer that acquires multiple captured images captured in chronological order by a camera mounted on a vehicle; an image processor that performs image processing on the acquired captured images; and a detector that detects the vehicle speed at a stop position based on a result of the image processing. The image processor includes a stop position identification unit that detects a marking indicating a stop position from a captured image to identify the stop position, and a vehicle speed calculation unit that calculates the vehicle speed by comparing the captured images in chronological order. The vehicle speed calculation unit excludes a specific region from each of the captured images when comparing them.
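The speed calculation with an excluded region can be sketched in one dimension: estimate the pixel shift of road texture between consecutive frames while ignoring masked cells (e.g. where a moving object would corrupt the match). The 1-D signals, shift search, and scale factors are illustrative assumptions.

```python
import numpy as np

def vehicle_speed(frames, exclude, m_per_px, dt):
    """Estimate speed per frame pair by finding the pixel shift that
    best aligns consecutive frames, skipping an excluded region."""
    speeds = []
    valid = ~exclude
    for a, b in zip(frames, frames[1:]):
        best_shift, best_err = 0, float("inf")
        for s in range(len(a)):
            err = np.abs(np.roll(a, s)[valid] - b[valid]).sum()
            if err < best_err:
                best_shift, best_err = s, err
        speeds.append(best_shift * m_per_px / dt)
    return speeds

# Road texture shifts by 2 px between frames; cell 7 is the excluded region.
f0 = np.array([5, 0, 0, 0, 0, 0, 0, 0], float)
f1 = np.roll(f0, 2)
exclude = np.zeros(8, dtype=bool); exclude[7] = True
speeds = vehicle_speed([f0, f1], exclude, m_per_px=0.1, dt=0.1)
print(speeds)  # -> [2.0]  (2 px * 0.1 m/px / 0.1 s)
```

Excluding the region before comparison is the point of the claim: pixels that do not move with the road would otherwise bias the shift estimate.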
ROAD PAINT FEATURE DETECTION
The disclosed technology provides solutions for enhanced road paint feature detection, for example, in an autonomous vehicle (AV) deployment. A process of the disclosed technology can include steps for receiving image data from a vehicle mounted camera, receiving height map data corresponding to a location associated with the image data, and calculating a region of interest that includes a portion of the image data determined based on the height map data. An image patch is generated by projecting the portion of image data included within the region of interest into a top-down view. The image patch is analyzed to detect one or more road paint features in the top-down view, and, in response to detecting an unlabeled road paint feature, the unlabeled road paint feature is localized based at least in part on the height map data. Systems and machine-readable media are also provided.
Navigation based on free space determination
Systems and methods navigate a vehicle by determining a free space region in which the vehicle can travel. In one implementation, a system may include at least one processor programmed to receive from an image capture device, a plurality of images associated with the environment of a vehicle, analyze at least one of the plurality of images to identify a first free space boundary on a driver side of the vehicle and extending forward of the vehicle, a second free space boundary on a passenger side of the vehicle and extending forward of the vehicle, and a forward free space boundary forward of the vehicle and extending between the first free space boundary and the second free space boundary. The first free space boundary, the second free space boundary, and the forward free space boundary may define a free space region forward of the vehicle. The at least one processor of the system may be further programmed to determine a navigational path for the vehicle through the free space region and cause the vehicle to travel on at least a portion of the determined navigational path within the free space region forward of the vehicle.
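The three-boundary free-space region and the path check can be sketched directly. Straight lateral and forward boundaries are an illustrative simplification of the boundaries identified from image analysis; coordinates are vehicle-relative with y pointing forward.

```python
def in_free_space(point, left_x, right_x, forward_y):
    """True when a point lies inside the region bounded by the
    driver-side boundary (left_x), the passenger-side boundary
    (right_x), and the forward boundary (forward_y)."""
    x, y = point
    return left_x < x < right_x and 0.0 < y < forward_y

def path_is_free(path, left_x, right_x, forward_y):
    # a navigational path is usable only if every waypoint lies
    # within the free space region forward of the vehicle
    return all(in_free_space(p, left_x, right_x, forward_y) for p in path)

path = [(0.0, 5.0), (0.5, 10.0), (1.0, 15.0)]
ok = path_is_free(path, left_x=-2.0, right_x=2.0, forward_y=20.0)
blocked = in_free_space((3.0, 5.0), -2.0, 2.0, 20.0)
print(ok, blocked)  # -> True False
```

In the claimed system the boundaries come from image analysis and may be curved; the containment test is the same idea.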
Marking line detection system
A marking line detection system includes an imaging device, a marking line candidate detection unit, a marking line correction unit, a center line determination unit and a validity determination unit. The marking line candidate detection unit detects first and second vehicle's marking line candidates. The marking line correction unit corrects a position of the second vehicle's marking line candidate based on a position of the first vehicle's marking line candidate. When the first vehicle's marking line candidate is the valid center line, the marking line correction unit performs a correction of the position of the second vehicle's marking line candidate based on the position of the first vehicle's marking line candidate. When the first vehicle's marking line candidate is not the valid center line, the marking line correction unit does not perform the correction based on the position of the first vehicle's marking line candidate.
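The conditional correction rule can be sketched with lateral positions: correct the second candidate from the first only when the first is the valid center line. Representing each candidate by a single lateral offset, and snapping to one lane width, are illustrative assumptions.

```python
def correct_second_candidate(first, second, lane_width, first_is_valid_center):
    """When the first marking line candidate is the valid center line,
    correct the second candidate to one lane width from it (on its
    original side); otherwise perform no correction."""
    if not first_is_valid_center:
        return second
    side = 1.0 if second >= first else -1.0
    return first + side * lane_width

corrected = correct_second_candidate(first=0.0, second=3.9, lane_width=3.5,
                                     first_is_valid_center=True)
untouched = correct_second_candidate(first=0.0, second=3.9, lane_width=3.5,
                                     first_is_valid_center=False)
print(corrected, untouched)  # -> 3.5 3.9
```

The gate on center-line validity is the essence of the claim: an unreliable first candidate must not propagate its error into the second.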