Patent classifications
G06T2207/30256
Methods and apparatus for automatic collection of under-represented data for improving a training of a machine learning model
In some embodiments, a method can include executing a first machine learning model to detect at least one lane in each image from a first set of images. The method can further include determining an estimated location of a vehicle for each image, based on localization data captured using at least one localization sensor disposed at the vehicle. The method can further include selecting lane geometry data for each image, from a map and based on the estimated location of the vehicle. The method can further include executing a localization model to generate a set of offset values for the first set of images based on the lane geometry data and the at least one lane in each image. The method can further include selecting a second set of images from the first set of images based on the set of offset values and a previously determined offset threshold.
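The selection step above amounts to thresholding a per-image offset between the detected lanes and the map-derived lane geometry. A minimal sketch under that reading (the function names, the lane representation as lateral positions sampled at fixed forward ranges, and the mean-absolute-deviation offset are all hypothetical; the abstract does not specify how offset values are computed):

```python
def select_underrepresented(detected_lanes, map_lanes, offset_threshold):
    """Pick images whose detected lanes deviate from the map-derived lane
    geometry by more than the threshold; these are candidates for the
    under-represented training data the method collects."""
    selected = []
    for i, (det, ref) in enumerate(zip(detected_lanes, map_lanes)):
        # Offset value: mean absolute lateral deviation (metres) between
        # the detected lane and the lane geometry selected from the map.
        offset = sum(abs(d - r) for d, r in zip(det, ref)) / len(det)
        if offset > offset_threshold:
            selected.append(i)
    return selected

# Hypothetical lateral lane positions sampled at three forward ranges.
detected = [[1.80, 1.90, 2.00], [1.80, 2.40, 3.10], [1.75, 1.85, 1.95]]
from_map = [[1.80, 1.85, 1.90], [1.80, 1.85, 1.90], [1.80, 1.85, 1.90]]
hard_images = select_underrepresented(detected, from_map, offset_threshold=0.3)
```

Only the second image exceeds the threshold, so it alone would be added to the second set.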
METHOD OF ESTIMATING CURVATURE OF LANE IN FRONT OF VEHICLE AND LANE TRACKING CONTROL SYSTEM USING THE SAME
A method of estimating the curvature of a lane in front of a vehicle includes: obtaining a reference distance, a reference angle, a reference curvature, and a reference change-rate, based on an image captured by a front camera of the vehicle; calculating, based on the reference distance, the reference angle, the reference curvature, and the reference change-rate, respective estimation distances by which a specific portion of the vehicle is estimated to be spaced apart from a first extension line, at each of a plurality of target distances, on a transverse straight line spaced apart from the vehicle by the target distance in the forward direction along a second extension line extending from the specific portion of the vehicle in the forward direction of the vehicle; and calculating the curvature of the lane in front of the vehicle based on the respective estimation distances at the plurality of target distances.
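The four reference quantities match the coefficients of the standard clothoid lane model, so a plausible reading is that each estimation distance is the model's lateral offset evaluated at a target distance, and the curvature then follows from how those offsets bend. A rough sketch under that assumption (the small-angle polynomial form y(x) = d + a·x + C·x²/2 + Ċ·x³/6 and the second-difference recovery are assumptions, not quoted from the patent):

```python
def estimation_distance(d0, a0, c0, c1, x):
    """Estimated lateral spacing from the lane line at forward distance x,
    using a clothoid lane model (small-angle approximation): reference
    distance d0, reference angle a0 (rad), reference curvature c0 (1/m),
    and reference change-rate c1 (1/m^2)."""
    return d0 + a0 * x + 0.5 * c0 * x**2 + (c1 / 6.0) * x**3

def lane_curvature(d0, a0, c0, c1, targets, h=1.0):
    """Approximate lane curvature at each target distance from the lateral
    estimation distances via a central second difference (kappa ~ y'')."""
    curvatures = []
    for x in targets:
        y_prev = estimation_distance(d0, a0, c0, c1, x - h)
        y_mid = estimation_distance(d0, a0, c0, c1, x)
        y_next = estimation_distance(d0, a0, c0, c1, x + h)
        curvatures.append((y_next - 2 * y_mid + y_prev) / h**2)
    return curvatures

# With zero change-rate, the recovered curvature equals c0 at every target.
kappas = lane_curvature(d0=1.5, a0=0.01, c0=0.002, c1=0.0, targets=[10, 20, 30])
```

With a nonzero change-rate the second difference instead tracks c0 + c1·x, i.e. the curvature varying along the lane.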
DIVISION LINE RECOGNITION APPARATUS
A division line recognition apparatus including a detection part configured to detect an external situation around a subject vehicle, and an electronic control unit including a microprocessor and a memory connected to the microprocessor. The microprocessor is configured to perform: generating a map including division line information on a division line on a road, based on the external situation detected by the detection part; setting an area of an external space detectable by the detection part; determining whether an end of the division line on the map is located at a boundary of the area of the external space; and adding boundary information to the division line information when it is determined that the end of the division line is located at the boundary.
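The boundary determination can be pictured as an endpoint-on-rectangle test: if a mapped division line ends exactly where the detectable area ends, the end marks a sensing limit rather than a true end of the painted line, and the line is flagged accordingly. A minimal sketch (the rectangular detectable area, the dictionary record format, and the tolerance are hypothetical):

```python
def annotate_division_lines(division_lines, area_min, area_max, tol=0.5):
    """Add boundary information to each division line whose endpoint lies
    at the boundary of the detectable external-space area (within tol),
    meaning the mapped end may not be a true end of the line."""
    annotated = []
    for line in division_lines:
        end_x, end_y = line["points"][-1]
        on_boundary = (
            abs(end_x - area_min[0]) < tol or abs(end_x - area_max[0]) < tol or
            abs(end_y - area_min[1]) < tol or abs(end_y - area_max[1]) < tol
        )
        annotated.append({**line, "boundary": on_boundary})
    return annotated

lines = [
    {"id": 0, "points": [(5.0, 1.8), (49.9, 1.8)]},    # reaches the area edge
    {"id": 1, "points": [(5.0, -1.8), (20.0, -1.8)]},  # ends inside the area
]
result = annotate_division_lines(lines, area_min=(0.0, -10.0), area_max=(50.0, 10.0))
```

Downstream map generation could then treat flagged ends as "line continues beyond sensing range" rather than "line terminates here".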
Systems and Methods for Vehicle Information Capture Using White Light
A method for capturing vehicle information utilizing white light illumination, the method comprising the steps of: capturing, from a camera by a computing device, two or more near-infrared (NIR) or infrared (IR) images of a license plate of a vehicle; determining, by the computing device, whether the license plate was captured; in response to a determination that the vehicle's license plate was captured, determining whether two or more contiguous images of the vehicle's license plate were captured; in response to a determination that two or more contiguous images of the vehicle's license plate were captured, determining, by the computing device, a target illumination zone and a time at which the vehicle will pass through the target illumination zone; determining whether the vehicle is in the target illumination zone; in response to a determination that the vehicle is in the target illumination zone, initiating, by the computing device, a pulse of a white light; capturing, from the camera by the computing device during the pulse, a white light image; and determining, by the computing device, a license plate number based on the white light image.
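The timing step is the crux: two contiguous NIR detections give the plate's position at two known times, from which a constant-speed model predicts when the vehicle enters the illumination zone and thus when to fire the white-light pulse. A minimal sketch (the constant-speed assumption and all names and values are illustrative; the patent does not give the prediction model):

```python
def pulse_time(t1, d1, t2, d2, zone_distance):
    """Predict when the vehicle reaches the target illumination zone from
    two contiguous NIR detections, assuming constant speed.

    t1, t2: capture timestamps (s); d1, d2: plate distance from camera (m);
    zone_distance: distance of the illumination zone from the camera (m).
    """
    speed = (d1 - d2) / (t2 - t1)  # m/s toward the camera
    if speed <= 0:
        return None                # not approaching; do not pulse
    return t2 + (d2 - zone_distance) / speed

# Plate seen 30 m out, then 25 m out 0.25 s later -> 20 m/s approach speed,
# so the pulse is scheduled 0.5 s after the second frame.
t_fire = pulse_time(t1=0.00, d1=30.0, t2=0.25, d2=25.0, zone_distance=15.0)
```

In practice the schedule would be re-checked against the final "is the vehicle in the zone" determination before actually pulsing.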
METHOD FOR DETERMINING AND TRANSMITTING SLANT VISIBILITY RANGE INFORMATION, AND AIRCRAFT COMPRISING A SYSTEM FOR MEASURING A SLANT VISIBILITY RANGE
A method for determining and transmitting slant visibility range information for a runway at a predetermined decision altitude, including: a step of acquiring, by a first aircraft, an image of a surrounding scene located outside and ahead of the aircraft; a step of analyzing the image to detect visual runway references in the image and to measure the distance between the aircraft and each detected runway reference; a step, implemented by the first aircraft, of transmitting slant visibility range information to a ground station, the information including a distance between a visual runway reference and the aircraft measured in the analysis step if at least one runway reference has been detected, or otherwise an indication that no visual runway reference could be detected; and a step of transmitting, by the ground station, slant visibility range information associated with the runway to at least one second aircraft in flight.
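The transmitted message thus has two cases: a measured distance when at least one visual reference was detected, or an explicit "nothing visible" report. A minimal sketch of the message-building logic (the dictionary format is hypothetical, and reporting the farthest detected reference as the slant visibility range is an assumption, not stated in the abstract):

```python
def svr_report(runway, detections):
    """Build the slant-visibility-range message a first aircraft transmits
    to the ground station at the decision altitude.

    detections: list of (reference_name, measured_distance_m) pairs for
    visual runway references found in the image; may be empty.
    """
    if detections:
        # Assumed convention: report the farthest reference still visible,
        # i.e. the greatest distance at which a reference was detected.
        ref, dist = max(detections, key=lambda rd: rd[1])
        return {"runway": runway, "visible": True,
                "reference": ref, "slant_range_m": dist}
    # No reference detected: report that fact rather than a distance.
    return {"runway": runway, "visible": False, "slant_range_m": None}

msg = svr_report("27L", [("approach_lights", 850.0), ("threshold", 1200.0)])
empty = svr_report("27L", [])
```

The ground station would relay such messages to second aircraft approaching the same runway.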
APPARATUS AND METHOD FOR ESTIMATING ROAD GEOMETRY
A processing device includes a first processor configured to detect a bounding box of a distant vehicle in an input image generated by imaging the distant vehicle, and to extract at least one feature of the distant vehicle. A second processor is configured to estimate a geometry of a road on which the distant vehicle is located, based on a position of the at least one feature relative to at least a portion of the bounding box.
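One concrete instance of "feature position relative to the bounding box" is the tilt of a distant vehicle's tail-lamp pair: on a flat road the two lamps are level, so any tilt approximates the bank of the road under that vehicle. A minimal sketch of that idea (the tail-lamp choice and pixel values are hypothetical; the abstract names no specific feature):

```python
import math

def estimate_bank_angle(left_lamp, right_lamp):
    """Estimate the bank (roll) of the road under a distant vehicle from
    the pixel positions of its two tail lamps within the bounding box.
    Each lamp is an (x, y) pixel coordinate, with y growing downward."""
    dx = right_lamp[0] - left_lamp[0]
    dy = right_lamp[1] - left_lamp[1]
    return math.degrees(math.atan2(dy, dx))

# Right lamp sits 8 px lower than the left across a 100 px baseline,
# suggesting the road banks down to the right by a few degrees.
bank_deg = estimate_bank_angle(left_lamp=(400, 300), right_lamp=(500, 308))
```

A similar comparison of the lamps' height against the bounding-box bottom could indicate slope rather than bank.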
SIGN BACKSIDE MAPPING AND NAVIGATION
Systems and methods are provided for vehicle navigation. In one implementation, a host vehicle-based sparse map feature harvester system may include at least one processor programmed to: receive a first image captured by a forward-facing camera and a second image captured by a rearward-facing camera as the host vehicle travels along a road segment in a first direction; detect a first semantic feature represented in the first image and a second semantic feature represented in the second image, the semantic features being associated with predetermined object type classifications; identify position descriptors associated with the first semantic feature and the second semantic feature; receive position information indicative of positions of the forward-facing and rearward-facing cameras when the first and second images were captured; and cause transmission of drive information, including the position descriptors and the position information, to a remotely located entity.
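The drive information is essentially a small record per harvested feature: its object type, which camera saw it, its image-space position descriptor, and where the camera was at capture time. A minimal sketch of such a payload (the field names, descriptor format, and coordinates are all hypothetical; the abstract does not define the transmission format):

```python
from dataclasses import dataclass, asdict

@dataclass
class HarvestedFeature:
    object_type: str        # predetermined classification, e.g. a sign type
    camera: str             # "forward" or "rearward"
    descriptor: tuple       # image-space position descriptor (x, y) in px
    camera_position: tuple  # camera position (lat, lon) at capture time

def build_drive_info(features):
    """Package harvested sparse-map features for transmission to the
    remotely located entity (e.g., a map server)."""
    return {"features": [asdict(f) for f in features]}

# The same stop sign harvested face-on (forward camera) and from its
# backside (rearward camera) as the host vehicle passes it.
payload = build_drive_info([
    HarvestedFeature("stop_sign", "forward", (412, 188), (37.7901, -122.4001)),
    HarvestedFeature("stop_sign", "rearward", (655, 240), (37.7903, -122.4002)),
])
```

Pairing forward and rearward observations like this is what lets the server map sign backsides for vehicles travelling the opposite direction.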
Vehicle intersection operation
A computer includes a processor and a memory, the memory storing instructions executable by the processor to collect a plurality of images of one or more targets at an intersection, input the images to a machine learning program to determine a number of the targets to which a host vehicle is predicted to yield at the intersection based on time differences between the plurality of images, and transmit a message indicating the number of the targets.
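The learned quantity here is a count: how many of the observed targets the host is predicted to yield to. As a stand-in for the machine learning program, a simple heuristic makes the output concrete: yield to each target predicted (from inter-image time differences) to reach the intersection first. This is an illustrative simplification, not the patent's model:

```python
def targets_to_yield(host_tti, target_ttis):
    """Heuristic stand-in for the learned model: the host vehicle is
    predicted to yield to each target whose time-to-intersection,
    estimated from time differences between images, is shorter than
    the host's own."""
    return sum(1 for t in target_ttis if t < host_tti)

# Host reaches the intersection in 4.0 s; two of three targets get there
# sooner, so the transmitted message would indicate two targets.
n = targets_to_yield(host_tti=4.0, target_ttis=[2.5, 3.9, 5.2])
```

The count would then be transmitted in the message the claim describes.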
Method, apparatus, and system for comparing and assimilating road lane representations using geospatial data and attribute data
An approach is provided for comparing and assimilating road lane representations. The approach involves, for example, receiving two cartographic feature representations (e.g., digital road lane representations). The approach also involves computing a geometric similarity between the cartographic feature representations. The approach further involves processing attribute data associated with the cartographic feature representations to determine a semantic relationship between the representations. The approach further involves generating a recommendation with respect to assimilating the representations based on the geometric similarity and the semantic relationship, and providing the recommendation as an output.
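The two signals combined in the recommendation can be sketched concretely: a geometric similarity between the two lane polylines, and a semantic check over their attribute data. A minimal illustration (the symmetric mean nearest-point distance, the `lane_type` attribute, and the tolerance are all hypothetical choices; the abstract specifies neither metric):

```python
def geometric_similarity(poly_a, poly_b):
    """Symmetric mean point-to-nearest-point distance between two lane
    polylines: a crude proxy for geometric similarity (lower = closer)."""
    def one_way(src, dst):
        return sum(min(((x - u) ** 2 + (y - v) ** 2) ** 0.5 for u, v in dst)
                   for x, y in src) / len(src)
    return 0.5 * (one_way(poly_a, poly_b) + one_way(poly_b, poly_a))

def recommend(poly_a, poly_b, attrs_a, attrs_b, dist_tol=1.0):
    """Recommend assimilation when the geometries are close AND the
    attribute data agree (a stand-in for the semantic relationship)."""
    close = geometric_similarity(poly_a, poly_b) <= dist_tol
    same_semantics = attrs_a.get("lane_type") == attrs_b.get("lane_type")
    return "assimilate" if close and same_semantics else "keep_separate"

rec = recommend([(0, 0.0), (10, 0.2)], [(0, 0.1), (10, 0.0)],
                {"lane_type": "driving"}, {"lane_type": "driving"})
rec2 = recommend([(0, 0.0), (10, 0.2)], [(0, 0.1), (10, 0.0)],
                 {"lane_type": "driving"}, {"lane_type": "bicycle"})
```

Geometrically identical lanes with conflicting semantics are kept separate, which is exactly why the approach combines both checks.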
PREDICTING THE FUTURE MOVEMENT OF AGENTS IN AN ENVIRONMENT USING OCCUPANCY FLOW FIELDS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for predicting the future movement of agents in an environment. In particular, the future movement is predicted through occupancy flow fields that specify, for each future time point in a sequence of future time points and for each agent type in a set of one or more agent types: an occupancy prediction that specifies, for each grid cell, an occupancy likelihood that any agent of the agent type will occupy the grid cell at the future time point, and a motion flow prediction that specifies, for each grid cell, a motion vector that represents predicted motion of agents of the agent type within the grid cell at the future time point.
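The two predictions share an indexing scheme: time point × agent type × grid cell, with a scalar likelihood in one field and a 2-D motion vector in the other. A minimal sketch of that data layout in plain Python (in practice these would be dense tensors output by a neural network; the shapes and cell convention here are illustrative):

```python
def make_occupancy_flow(num_steps, agent_types, height, width):
    """Allocate the two predicted fields: per future time point and agent
    type, an occupancy likelihood per grid cell, and a (dx, dy) motion
    vector per grid cell (in cells per time step)."""
    occupancy = [[[[0.0] * width for _ in range(height)]
                  for _ in agent_types] for _ in range(num_steps)]
    flow = [[[(0.0, 0.0)] * width for _ in range(height)]
            for _ in agent_types]
    flow = [[[ [(0.0, 0.0)] * width for _ in range(height)]
             for _ in agent_types] for _ in range(num_steps)]
    return occupancy, flow

occ, flow = make_occupancy_flow(num_steps=3,
                                agent_types=["vehicle", "pedestrian"],
                                height=4, width=4)
# A vehicle is likely in cell (row 2, col 1) at the first future time
# point, predicted to move one cell to the right per step.
occ[0][0][2][1] = 0.9
flow[0][0][2][1] = (1.0, 0.0)
```

Warping one time point's occupancy along its flow vectors and comparing it to the next time point's occupancy is what makes the two fields mutually consistent.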