Patent classifications
G06V20/182
SYSTEMS AND METHODS FOR TERRAIN MAPPING USING LIDAR
Systems and methods for generating ground-level terrain elevation models, preparing vector street data to assist in generating such models, and finding the approximate elevation of any point using such terrain models are provided. Lidar data can be analyzed, and Lidar elevation values at roadway/street intersections can be used to determine a model of the ground-level elevation in an area or region. Outliers can be removed. The ground-level elevation at any point in the mapped area can then be determined from the elevation values of nearby roadway intersections.
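The interpolation step described in this abstract can be sketched as follows. The function name, the inverse-distance weighting scheme, and the standard-deviation outlier test are illustrative assumptions, not the patented method:

```python
import numpy as np

def estimate_ground_elevation(query_xy, intersections, k=4, z_thresh=2.0):
    """Estimate ground elevation at query_xy by inverse-distance weighting
    of lidar elevations sampled at nearby roadway intersections.

    intersections: array-like of shape (n, 3) with columns (x, y, elevation).
    Neighbor elevations further than z_thresh standard deviations from the
    local mean are treated as outliers and removed before interpolation.
    """
    pts = np.asarray(intersections, dtype=float)
    d = np.hypot(pts[:, 0] - query_xy[0], pts[:, 1] - query_xy[1])
    nearest = pts[np.argsort(d)[:k]]   # the k closest intersections
    dn = np.sort(d)[:k]                # their distances, same order

    # Remove elevation outliers among the selected neighbors.
    z = nearest[:, 2]
    keep = np.abs(z - z.mean()) <= z_thresh * (z.std() or 1.0)
    z, dn = z[keep], dn[keep]

    # Inverse-distance weights; a near-coincident intersection dominates.
    w = 1.0 / np.maximum(dn, 1e-9)
    return float(np.sum(w * z) / np.sum(w))
```

A query point equidistant from four intersections simply receives the mean of their (inlier) elevations.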
ROADWAY OCCLUSION DETECTION AND REASONING
A method for updating a map including receiving a first image depicting a geographical area including a first roadway segment and an occluded area; determining a location of the first roadway segment in response to the first image; receiving a plurality of vehicle telemetry data associated with the first roadway segment and a second roadway segment within the occluded area; updating map data with the location of the first roadway segment; determining a location of the occluded area in response to the first image and the vehicle telemetry data associated with the second roadway segment; requesting alternate data in response to determination of the location of the occluded area; determining a location of the second roadway segment, which was occluded in the first image, in response to the alternate data; and updating the map data with the location of the second roadway segment.
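As an illustration of how vehicle telemetry can reveal a roadway hidden from imagery, the sketch below fits a straight centerline to GPS points that traversed the occluded area. The PCA-based line fit and the function name are assumptions for illustration, not the claimed method:

```python
import numpy as np

def fit_occluded_segment(telemetry_xy):
    """Recover a straight-line approximation of a road segment hidden in
    imagery, from vehicle telemetry (GPS) points that traversed it.

    Returns the two segment endpoints obtained by projecting the points
    onto their principal axis (a least-squares line fit via SVD/PCA).
    """
    pts = np.asarray(telemetry_xy, dtype=float)
    center = pts.mean(axis=0)
    # Principal direction of the point cloud = dominant travel direction.
    _, _, vt = np.linalg.svd(pts - center)
    direction = vt[0]
    # The extent of the points along that direction gives the endpoints.
    t = (pts - center) @ direction
    return center + t.min() * direction, center + t.max() * direction
```

Telemetry points lying along a diagonal road, for example, yield the two extreme points of that diagonal as the recovered segment.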
LOCALIZATION OF INDIVIDUAL PLANTS BASED ON HIGH-ELEVATION IMAGERY
Implementations are described herein for localizing individual plants using high-elevation images at multiple different resolutions. A first set of high-elevation images that capture a plurality of plants at a first resolution may be analyzed to classify a set of pixels as invariant anchor points. High-elevation images of the first set may be aligned with each other based on the invariant anchor points that are common among at least some of the first set of high-elevation images. A mapping may be generated between pixels of the aligned high-elevation images of the first set and spatially-corresponding pixels of a second set of higher-resolution high-elevation images. Based at least in part on the mapping, individual plant(s) of the plurality of plants may be localized within one or more of the second set of high-elevation images for performance of one or more agricultural tasks.
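Aligning images on matched anchor points typically reduces to estimating a geometric transform between the matched pixel coordinates. The sketch below solves for a 2x3 affine transform by linear least squares; the affine model and function names are simplifying assumptions (the patent does not specify the transform family):

```python
import numpy as np

def fit_affine(anchors_src, anchors_dst):
    """Estimate the 2x3 affine transform mapping anchor-point pixels in
    one high-elevation image onto matching anchor points in another,
    via linear least squares.

    anchors_src, anchors_dst: (n, 2) arrays of matched (x, y) pixels.
    """
    src = np.asarray(anchors_src, dtype=float)
    dst = np.asarray(anchors_dst, dtype=float)
    # Homogeneous source coordinates: [x, y, 1]
    A = np.hstack([src, np.ones((len(src), 1))])
    # Solve A @ M.T ~= dst for the 2x3 matrix M.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T

def apply_affine(M, pts):
    """Map (n, 2) points through a 2x3 affine matrix M."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```

With three or more non-collinear anchor correspondences the fit is exact for a true affine motion; extra correspondences are averaged in the least-squares sense.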
Method and apparatus for the detection and labeling of features of an environment through contextual clues
Described herein are methods of detecting and labeling features within an image of an environment. Methods may include: receiving sensor data from an image sensor, where the sensor data is representative of a first image including an aerial view of a geographic region; detecting, using a perception module, at least one vehicle within the first image of the geographic region; identifying an area around the at least one vehicle as a road segment in response to detecting the at least one vehicle; based on the identification of the area around the vehicle as a road segment, identifying features within the area as road features based on a context of the area; generating a map update for the road features of the road segment; and causing a map database to be updated with the road features of the road segment.
LOCALIZATION OF INDIVIDUAL PLANTS BASED ON HIGH-ELEVATION IMAGERY
Implementations are described herein for localizing individual plants by aligning high-elevation images using invariant anchor points while disregarding variant feature points, such as deformable plants. High-elevation images that capture a plurality of plants at a resolution at which wind-triggered deformation of individual plants is perceptible between the high-elevation images may be obtained. First regions of the high-elevation images that depict the plurality of plants may be classified as variant features that are unusable as invariant anchor points. Second regions of the high-elevation images that are disjoint from the first regions may be classified as invariant anchor points. The high-elevation images may be aligned based on invariant anchor point(s) that are common among at least some of the high-elevation images. Based on the aligned high-elevation images, individual plant(s) may be localized within one of the high-elevation images for performance of one or more agricultural tasks.
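The variant/invariant split amounts to rejecting candidate anchor pixels that fall inside regions classified as deformable. A minimal sketch, assuming a boolean plant mask and (row, col) keypoints (both names are hypothetical):

```python
import numpy as np

def select_invariant_anchors(candidates, plant_mask):
    """Keep only candidate anchor pixels that fall outside regions
    classified as variant (e.g. wind-deformed plants).

    candidates: (n, 2) integer array of (row, col) keypoint pixels.
    plant_mask: boolean 2D array, True where plants were detected.
    """
    c = np.asarray(candidates)
    variant = plant_mask[c[:, 0], c[:, 1]]   # True for points on plants
    return c[~variant]
```

Only the surviving anchors would then feed the image-alignment step, so that swaying foliage cannot corrupt the estimated transform.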
Method for flood disaster monitoring and disaster analysis based on vision transformer
A method for flood disaster monitoring and disaster analysis based on a vision transformer is provided. It includes: step (1), constructing a bi-temporal image change detection model based on a vision transformer; step (2), selecting bi-temporal remote sensing images to create flood disaster labels; and step (3), performing flood monitoring and disaster analysis with the bi-temporal image change detection model constructed in step (1). By combining the bi-temporal change detection model, built on an advanced deep-learning vision transformer, with radar data, which is unaffected by time of day and weather and has strong penetration capability, imagery captured while floods occur can be obtained and recognition accuracy is improved.
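For context, the classical baseline that transformer-based change detectors improve upon is pixel-wise image differencing. The sketch below is that naive baseline under assumed names and a fixed threshold, not the patented model:

```python
import numpy as np

def flood_change_mask(pre_image, post_image, threshold=0.2):
    """Naive bi-temporal change map: pixels whose normalized intensity
    changed by more than `threshold` between the pre- and post-flood
    acquisitions are flagged as changed (candidate inundation).
    """
    pre = np.asarray(pre_image, dtype=float)
    post = np.asarray(post_image, dtype=float)
    # Normalize each image to [0, 1] so the threshold is comparable
    # across acquisitions; guard against constant images.
    pre = (pre - pre.min()) / (np.ptp(pre) or 1.0)
    post = (post - post.min()) / (np.ptp(post) or 1.0)
    return np.abs(post - pre) > threshold
```

A learned bi-temporal model replaces the per-pixel difference with features that suppress illumination and registration noise, which is where the accuracy gain in the abstract comes from.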
METHOD, APPARATUS, AND SYSTEM FOR DETECTING AND CODING A ROAD STACK INTERCHANGE BASED ON IMAGE DATA
An approach is provided for detecting and coding a grade-separated road intersection based on image data. The approach involves, for example, retrieving an image depicting a road intersection from a top-down perspective. The road intersection comprises two or more road links. The approach also involves processing the image to determine continuity data of the two or more road links. The continuity data represents a visual continuity of respective depictions of the two or more road links in the image. The approach further involves determining a stacking order of the two or more road links based on the continuity data. The approach further involves providing the stacking order as an output.
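The inference from continuity to stacking order admits a compact illustration: in a top-down image the uppermost link appears unbroken, while each link below it is interrupted where higher links cross it. This sketch assumes a single scalar continuity score per link, a simplification of the continuity data described above:

```python
def stacking_order(links):
    """Order road links at a grade-separated intersection from top to
    bottom using per-link visual continuity scores.

    Higher continuity implies a higher position in the stack, since
    lower links are visually interrupted by the links above them.

    links: dict mapping link id -> continuity score in [0, 1].
    """
    return sorted(links, key=links.get, reverse=True)
```

A fully continuous flyover thus sorts above a partially interrupted ramp, which in turn sorts above the ground-level road it crosses.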
Electronic device for providing visual localization based on outdoor three-dimension map information and operating method thereof
Electronic devices, and operating methods thereof, for providing visual localization based on outdoor three-dimensional (3D) map information may be provided. Such electronic devices may be configured to acquire two-dimensional (2D) image information about an outdoor environment, generate 3D map information about the outdoor environment based on the 2D image information, and determine the position of a point in the 3D map information that corresponds to a query image.
ELECTRIC GRID CONNECTION MAPPING
Methods, systems, and apparatus, including computer programs encoded on a storage device, for predicting connections in electric grid models are disclosed. A method includes obtaining geospatial data representing a geographic area that includes an electrical distribution system; and generating, from the geospatial data, asset data that represents characteristics of electrical distribution system assets. The asset data includes: load data representing electrical loads of the electrical distribution system; and node data representing nodes of the electrical distribution system. The method includes processing the asset data using a connection model that is configured to predict electrical connections between assets of the electrical distribution system; and obtaining, from the connection model, output data indicating predicted electrical connections between assets of the electrical distribution system. The geospatial data includes at least one of overhead imagery or street level imagery of the geographic area.
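A simple geometric baseline for such a connection model is to attach each load to its nearest node. The sketch below is that proximity baseline under assumed names; the patented model would refine these predictions with learned asset attributes:

```python
import numpy as np

def predict_connections(loads_xy, nodes_xy):
    """Baseline connection model: predict that each electrical load is
    served by its geographically nearest distribution-system node.

    loads_xy, nodes_xy: (n, 2) and (m, 2) arrays of coordinates.
    Returns a list of (load_index, node_index) predicted connections.
    """
    loads = np.asarray(loads_xy, dtype=float)
    nodes = np.asarray(nodes_xy, dtype=float)
    # Pairwise distances between every load and every node.
    d = np.linalg.norm(loads[:, None, :] - nodes[None, :, :], axis=2)
    return [(i, int(j)) for i, j in enumerate(d.argmin(axis=1))]
```

Proximity alone ignores obstacles, phases, and capacity, which is why a learned model over richer asset data is needed in practice.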
Method and apparatus for matching heterogeneous feature spaces
An approach is provided for fully-automated learning to match heterogeneous feature spaces for mapping. The approach involves determining a first feature space comprising first features and a second feature space comprising second features, the first and second features being classified by a feature detector into a first attribution category and a second attribution category, respectively. The approach further involves calculating a first similarity score for the first feature space based on a first distance metric applied to the first features, and a second similarity score for the second feature space based on a second distance metric applied to the second features. The approach also involves determining a transformation space comprising a first weight to be applied to the first similarity score and a second weight to be applied to the second similarity score based on matching the first attribution category and the second attribution category.
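The weighted combination of the two similarity scores can be sketched directly; the linear combination and function names are assumptions for illustration (the abstract does not fix the combination rule):

```python
def combined_similarity(sim1, sim2, w1, w2):
    """Combine per-feature-space similarity scores with the weights
    drawn from the transformation space; a higher combined score
    indicates a better cross-space match."""
    return w1 * sim1 + w2 * sim2

def best_match(candidates, w1, w2):
    """Pick the candidate pair with the highest weighted similarity.

    candidates: dict mapping pair id -> (sim1, sim2) score tuple.
    """
    return max(candidates,
               key=lambda k: combined_similarity(*candidates[k], w1, w2))
```

Learning the weights (w1, w2) lets one feature space dominate when its distance metric is more discriminative for a given pair of attribution categories.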