Patent classifications
G06T2207/30256
CONTROLLING HOST VEHICLE BASED ON DETECTED DOOR OPENING EVENTS
Systems and methods are provided for navigating an autonomous vehicle. In one implementation, a system for navigating a host vehicle based on detecting a door opening event may include at least one processing device. The processing device may be programmed to receive at least one image associated with the environment of the host vehicle, analyze the at least one image to identify a side of a parked vehicle, identify a first structural feature of the parked vehicle and a second structural feature of the parked vehicle, identify a door edge of the parked vehicle in a vicinity of the first and second structural features, determine a change of an image characteristic of the door edge of the parked vehicle, and alter a navigational path of the host vehicle based at least in part on the change of the image characteristic of the door edge of the parked vehicle.
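The final two steps of the claimed pipeline (detecting a change in an image characteristic of the door edge, then altering the navigational path) can be sketched in miniature. This is an illustrative stand-in, not the patented method: the choice of edge length as the monitored characteristic, the thresholds, and all function names are assumptions.

```python
def door_opening_detected(edge_lengths, growth_threshold=1.2):
    """Return True if the door-edge measurement grows sharply between
    successive frames, suggesting the door is swinging open.
    `edge_lengths` holds one value (e.g. pixels) per frame."""
    for prev, curr in zip(edge_lengths, edge_lengths[1:]):
        if prev > 0 and curr / prev >= growth_threshold:
            return True
    return False

def adjust_lateral_offset(base_offset_m, opening):
    """Widen the host vehicle's lateral clearance from the parked
    vehicle when an opening event is detected (0.5 m is arbitrary)."""
    return base_offset_m + (0.5 if opening else 0.0)

# Edge-length samples near a detected door edge over five frames.
lengths = [40, 41, 43, 60, 75]
offset = adjust_lateral_offset(1.0, door_opening_detected(lengths))
```

The jump from 43 to 60 pixels exceeds the assumed 1.2x growth threshold, so the planned path is shifted outward.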
ELECTRONIC DEVICE, METHOD, AND COMPUTER READABLE STORAGE MEDIUM FOR OBTAINING LOCATION INFORMATION OF AT LEAST ONE SUBJECT BY USING PLURALITY OF CAMERAS
An electronic device mountable in a vehicle includes a plurality of cameras facing different directions of the vehicle, a memory, and a processor. The processor obtains a plurality of frames captured by the plurality of cameras, which are synchronized with each other. The processor identifies, from the plurality of frames, one or more lines included in a road on which the vehicle is located. The processor identifies, from the plurality of frames, one or more subjects located in a space adjacent to the vehicle. The processor obtains, based on the one or more lines, information indicating the locations of the one or more subjects in the space. The processor stores the obtained information in the memory.
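One simple way detected road lines could anchor subject locations is to bin each subject's lateral position against the line positions in a road-aligned frame. A minimal sketch under assumed coordinates (metres, lateral axis); the abstract does not specify this representation:

```python
def lane_slot(subject_x, line_xs):
    """Return which lane-relative slot a subject occupies, given the
    lateral positions of the detected lines. Slot 0 is left of the
    leftmost line, slot len(line_xs) is right of the rightmost."""
    for i, x in enumerate(sorted(line_xs)):
        if subject_x < x:
            return i
    return len(line_xs)

# Two lane lines at +/-1.75 m; a subject at x = 0 sits between them.
slot = lane_slot(0.0, [-1.75, 1.75])
```

Expressing locations relative to the lines, rather than in raw camera coordinates, is what lets frames from differently oriented cameras describe subjects in one shared space.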
Systems and methods for aligning map data
Systems, methods, and non-transitory computer-readable media can receive a geometric map and a semantic map associated with a geographic area, the semantic map comprising semantic data associated with vehicle navigation. A first semantic position estimate associated with a first piece of semantic data contained in the semantic map is generated based on semantic data location information associated with the first piece of semantic data. A final position for the first semantic position estimate is received. One or more three-dimensional semantic labels are applied to the geometric map based on the final position of the first semantic position estimate. A warped semantic map is generated. Generating the warped semantic map comprises warping the semantic map based on the one or more three-dimensional semantic labels.
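The warping step could, for instance, spread each anchor's correction (final position minus semantic position estimate) across nearby map points. The toy inverse-distance interpolation below is a stand-in to show the idea, not the patented warp; all names are illustrative.

```python
def warp_point(p, estimates, finals, eps=1e-6):
    """Warp a 2-D semantic-map point by interpolating the corrections
    (final - estimate) of the anchors with inverse-squared-distance
    weights. `eps` avoids division by zero at an anchor itself."""
    wx = wy = wsum = 0.0
    for (ex, ey), (fx, fy) in zip(estimates, finals):
        dx, dy = fx - ex, fy - ey
        d2 = (p[0] - ex) ** 2 + (p[1] - ey) ** 2 + eps
        w = 1.0 / d2
        wx += w * dx
        wy += w * dy
        wsum += w
    return (p[0] + wx / wsum, p[1] + wy / wsum)

# A point sitting on an anchor inherits that anchor's full correction.
moved = warp_point((0.0, 0.0), [(0.0, 0.0)], [(1.0, 2.0)])
```

Points far from every anchor receive a blend of all corrections, so the semantic map deforms smoothly toward the geometric map rather than tearing at label boundaries.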
Lane detection and tracking techniques for imaging systems
A method for detecting boundaries of lanes on a road is presented. The method comprises receiving, by one or more processors from an imaging system, a set of pixels associated with lane markings. The method further includes partitioning, by the one or more processors, the set of pixels into a plurality of groups. Each of the plurality of groups is associated with one or more control points. The method further includes generating, by the one or more processors, a spline that traverses the control points of the plurality of groups. The spline traversing the control points describes a boundary of a lane.
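The abstract does not name the spline family used to traverse the control points. A Catmull-Rom segment is one common interpolating choice, shown here as an assumption: the curve passes through its two interior control points, which matches the "spline that traverses the control points" requirement.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom spline segment at t in [0, 1].
    The curve passes through p1 (t=0) and p2 (t=1); p0 and p3
    shape the tangents. Points are (x, y) tuples."""
    def blend(a, b, c, d):
        return 0.5 * ((2 * b)
                      + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (-a + 3 * b - 3 * c + d) * t ** 3)
    return (blend(p0[0], p1[0], p2[0], p3[0]),
            blend(p0[1], p1[1], p2[1], p3[1]))

# Four control points from consecutive pixel groups along one boundary.
mid = catmull_rom((0, 0), (1, 1), (2, 2), (3, 3), 0.5)
```

Chaining one such segment per adjacent pair of control points yields a single smooth curve through all of them, i.e. the lane boundary.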
Sensor fusion for autonomous machine applications using machine learning
In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
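Outside the learned network, the deduplication behaviour the fusion DNN is trained to approximate can be illustrated with a greedy IoU merge across sensors. This hand-written sketch is a stand-in for intuition only, not the network itself; boxes are (x1, y1, x2, y2) and all names are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_detections(per_sensor_boxes, iou_threshold=0.5):
    """Greedily keep one box per physical object across sensor outputs:
    a box is dropped if it overlaps an already-kept box too strongly,
    as happens for the same object seen in two overlapping fields of view."""
    fused = []
    for boxes in per_sensor_boxes:
        for box in boxes:
            if all(iou(box, kept) < iou_threshold for kept in fused):
                fused.append(box)
    return fused
```

The learned fusion network subsumes this logic and can additionally weigh per-sensor reliability and boundary-region context, which a fixed IoU rule cannot.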
SYSTEMS AND METHODS FOR VEHICLE SIGNAL LIGHT DETECTION
Systems and methods are provided for analyzing vehicle signal lights in order to operate an autonomous vehicle. A method includes receiving an image from a camera regarding a vehicle proximate to the autonomous vehicle. Data from a lidar sensor regarding the proximate vehicle is used to determine object information for identifying a subsection within the camera image. The identified subsection corresponds to an area of the proximate vehicle containing one or more vehicle signals. One or more vehicle signal lights of the proximate vehicle are located by using the identified camera image subsection as an area of focus.
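Deriving the image subsection from lidar object information might look like projecting the object's 3-D points through the camera model and padding their bounding box. The pinhole intrinsics and margin below are assumed values for illustration, not taken from the patent.

```python
def project_to_image(points_xyz, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Project lidar points (already in the camera frame, z forward)
    through an assumed pinhole model; points behind the camera are dropped."""
    return [(fx * x / z + cx, fy * y / z + cy)
            for x, y, z in points_xyz if z > 0]

def signal_light_roi(points_xyz, margin_px=10):
    """Padded bounding box of the projected points: the camera-image
    subsection used as the area of focus for signal-light detection."""
    uv = project_to_image(points_xyz)
    us = [u for u, _ in uv]
    vs = [v for _, v in uv]
    return (min(us) - margin_px, min(vs) - margin_px,
            max(us) + margin_px, max(vs) + margin_px)

# Two lidar returns from a vehicle 10 m ahead bound the search region.
roi = signal_light_roi([(1.0, 0.0, 10.0), (-1.0, 0.0, 10.0)])
```

Restricting the light detector to this crop avoids scanning the full frame and ties each detected light to a specific tracked vehicle.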
Image recognition system for a vehicle and corresponding method
An image recognition system and method for a vehicle. The system includes at least two camera units, each configured to record an image of a road in the vicinity of the vehicle and to provide image data representing the respective image of the road; a first image processor configured to combine the image data provided by the at least two camera units into a first top-view image aligned to a road image plane; a first feature extractor configured to extract lines from the first top-view image; a second feature extractor configured to extract an optical flow from the first top-view image and a second top-view image generated before the first top-view image by the first image processor; and a curb detector configured to detect curbs in the road based on the extracted lines and the extracted optical flow, and to provide curb data representing the detected curbs.
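The curb detector's fusion of line and flow cues can be caricatured as a single predicate: in a top-view warp that assumes a flat road plane, a raised curb produces an optical-flow discontinuity along an otherwise straight line. The features and thresholds below are assumptions for illustration, not the patented detector.

```python
def is_curb(flow_left, flow_right, line_straightness,
            flow_gap=0.5, straight_min=0.9):
    """Flag an extracted line as a curb when the top-view optical-flow
    magnitudes on its two sides differ by at least `flow_gap` (a height
    step violates the flat-road assumption of the top-view warp) and the
    line is sufficiently straight (straightness in [0, 1])."""
    return (abs(flow_left - flow_right) >= flow_gap
            and line_straightness >= straight_min)

# Strong flow mismatch across a straight line: likely a curb.
candidate = is_curb(1.0, 0.2, 0.95)
```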
SYSTEMS AND METHODS FOR CREATING AND/OR ANALYZING THREE-DIMENSIONAL MODELS OF INFRASTRUCTURE ASSETS
Systems and methods for detecting, geolocating, assessing, and/or inventorying infrastructure assets. In some embodiments, a plurality of images captured by a moving camera may be used to generate a point cloud. A plurality of points corresponding to a pavement surface may be identified from the point cloud. The plurality of points may be used to generate at least one synthetic image of the pavement surface, the at least one synthetic image having at least one selected camera pose. The at least one synthetic image may be used to assess at least one condition of the pavement surface.
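Generating a synthetic image of the pavement surface from the point cloud can be sketched with a toy orthographic rasteriser, where the chosen origin plays the role of the selected camera pose. This is a simplified stand-in (real synthetic views would use a full perspective pose and interpolation); names and resolutions are assumptions.

```python
def render_top_down(points, origin, ppm=2.0, width=8, height=8):
    """Rasterise (x, y, intensity) pavement points into a top-down grid.
    `origin` is the (x, y) of the synthetic camera, `ppm` is pixels per
    metre; overlapping points keep the brightest return."""
    img = [[0.0] * width for _ in range(height)]
    ox, oy = origin
    for x, y, intensity in points:
        col = int((x - ox) * ppm)
        row = int((y - oy) * ppm)
        if 0 <= row < height and 0 <= col < width:
            img[row][col] = max(img[row][col], intensity)
    return img

# One pavement point 1 m ahead and 1 m across lands at pixel (2, 2).
view = render_top_down([(1.0, 1.0, 0.8)], (0.0, 0.0))
```

Because the pose is chosen freely, the same point cloud can be re-rendered from a nadir or oblique view that makes cracks and deformations easier to assess than in the original moving-camera frames.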
METHOD FOR PROCESSING MAP, ELECTRONIC DEVICE AND STORAGE MEDIUM
A method for processing a map, an electronic device, and a storage medium, which relate to the field of computer technology, in particular to computer vision and high-definition map technology. The method includes: segmenting a first road line to obtain a plurality of first sub-road lines, wherein the first road line is obtained according to a segmentation mask for an image, and the image corresponds to a target region; segmenting a second road line to obtain a plurality of second sub-road lines, wherein the second road line is obtained according to trajectory information corresponding to the target region; and determining a target road line according to first similarities between the plurality of first sub-road lines and the plurality of second sub-road lines.
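The similarity comparison between mask-derived and trajectory-derived sub-road lines might be sketched as follows. The metric (inverse mean point distance over equally sampled polylines) and the matching rule are illustrative assumptions; the abstract does not specify either.

```python
import math

def polyline_similarity(a, b):
    """Similarity of two sub-road lines sampled at the same point count:
    1 / (1 + mean pointwise distance), so identical lines score 1.0."""
    dists = [math.dist(p, q) for p, q in zip(a, b)]
    return 1.0 / (1.0 + sum(dists) / len(dists))

def match_sublines(first_subs, second_subs):
    """For each mask-derived sub-line, index of the most similar
    trajectory-derived sub-line."""
    return [max(range(len(second_subs)),
                key=lambda j: polyline_similarity(a, second_subs[j]))
            for a in first_subs]

# A mask sub-line matches the trajectory sub-line 0.1 m away, not the
# one 5 m away.
matches = match_sublines([[(0, 0), (1, 0)]],
                         [[(0, 5), (1, 5)], [(0, 0.1), (1, 0.1)]])
```

Combining matched pairs (e.g. preferring the mask geometry where similarity is high and falling back to trajectory geometry elsewhere) would then yield the target road line.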
METHODS AND APPARATUS FOR AUTOMATIC COLLECTION OF UNDER-REPRESENTED DATA FOR IMPROVING A TRAINING OF A MACHINE LEARNING MODEL
In some embodiments, a method can include executing a first machine learning model to detect at least one lane in each image from a first set of images. The method can further include determining an estimated location of a vehicle for each image, based on localization data captured using at least one localization sensor disposed at the vehicle. The method can further include selecting lane geometry data for each image, from a map and based on the estimated location of the vehicle. The method can further include executing a localization model to generate a set of offset values for the first set of images based on the lane geometry data and the at least one lane in each image. The method can further include selecting a second set of images from the first set of images based on the set of offset values and a previously-determined offset threshold.
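The final selection step reduces to a threshold filter over the offset values: images where the detected lanes disagree strongly with the map's lane geometry are the under-represented cases worth collecting for retraining. A minimal sketch, with all names assumed:

```python
def select_underrepresented(images, offsets, threshold):
    """Keep images whose offset between detected lanes and map lane
    geometry exceeds the previously determined threshold; large offsets
    suggest scenes the current lane model handles poorly."""
    return [img for img, off in zip(images, offsets)
            if abs(off) > threshold]

# Offsets in metres per image; 0.5 m is an assumed threshold.
second_set = select_underrepresented(['img_a', 'img_b', 'img_c'],
                                     [0.1, 0.9, -0.7], 0.5)
```

Feeding only this second, high-offset set back into training targets the model's weak spots without manually curating the full image stream.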