Patent classifications
G01C11/04
IMAGING RANGE ESTIMATION DEVICE, IMAGING RANGE ESTIMATION METHOD, AND PROGRAM
An imaging range estimation device includes: an image data processor configured to acquire image data captured by a camera device and generate image data with an object name label added; a reference data generator configured to use geographic information to set a region within a predetermined distance that is imageable from the estimated position at which the camera device is installed, and to generate reference data with an object name label added; and an imaging range estimator configured to calculate a concordance rate by comparing a feature indicated by a region of an object name label of the image data with a feature indicated by a region of an object name label of the reference data, and to estimate the imaging range of the camera device as the region of the reference data that corresponds to the image data.
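As an illustration (not part of the patent text), the concordance-rate comparison might be sketched as follows. The label grids, region structure, and function names are assumptions for the sketch; the abstract does not specify a representation.

```python
# Hypothetical sketch: compare an object-name label grid from the camera image
# against labeled candidate regions generated from geographic reference data,
# and pick the region with the highest concordance rate.

def concordance_rate(image_labels, ref_labels):
    """Fraction of cells whose object-name labels agree between the two grids."""
    cells = [(a, b) for row_a, row_b in zip(image_labels, ref_labels)
             for a, b in zip(row_a, row_b)]
    matches = sum(1 for a, b in cells if a == b)
    return matches / len(cells)

def estimate_imaging_range(image_labels, candidate_regions):
    """Return the candidate reference region that best matches the image labels."""
    return max(candidate_regions,
               key=lambda region: concordance_rate(image_labels, region["labels"]))
```

A candidate region here is a dict with a `"labels"` grid; the region returned by `estimate_imaging_range` would be taken as the estimated imaging range.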
Terrain trafficability assessment for autonomous or semi-autonomous rover or vehicle
A rover or a semi-autonomous or autonomous vehicle may use an image classifier to determine the terrain class of regions in an image of the terrain ahead. These regions are also used to estimate the slope of the terrain in each region. The terrain class and slope are used to predict the amount of slip the rover or vehicle will experience when traversing the terrain of each region. A heuristic mapping for the terrain class may be applied to the predicted slip amount to determine a hazard level for traversing that terrain.
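As an illustration (not part of the patent text), the slip-prediction and heuristic hazard-mapping steps might look like the following. The slip model, per-class coefficients, and hazard thresholds are invented values for the sketch.

```python
# Hypothetical sketch: predict slip from terrain class and slope, then map the
# predicted slip to a hazard level via per-class heuristic thresholds.
import math

SLIP_MODEL = {          # assumed per-terrain-class slip coefficients
    "sand":    {"gain": 0.9},
    "bedrock": {"gain": 0.2},
}

def predict_slip(terrain_class, slope_deg):
    """Predicted slip fraction, growing with the tangent of the slope."""
    gain = SLIP_MODEL[terrain_class]["gain"]
    return min(1.0, gain * math.tan(math.radians(slope_deg)))

def hazard_level(terrain_class, slope_deg):
    """Heuristic mapping from predicted slip to a hazard level."""
    slip = predict_slip(terrain_class, slope_deg)
    if slip < 0.3:
        return "safe"
    if slip < 0.6:
        return "caution"
    return "hazard"
```

A planner could then reject path segments whose regions map to `"hazard"`.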
Systems and methods for navigating a vehicle among encroaching vehicles
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a method for navigating a user vehicle may include acquiring, using at least one image capture device, a plurality of images of an area in a vicinity of the user vehicle; determining from the plurality of images a first lane constraint on a first side of the user vehicle and a second lane constraint on a second side of the user vehicle opposite to the first side of the user vehicle; enabling the user vehicle to pass a target vehicle if the target vehicle is determined to be in a lane different from the lane in which the user vehicle is traveling; and causing the user vehicle to abort the pass before completion of the pass, if the target vehicle is determined to be entering the lane in which the user vehicle is traveling.
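As an illustration (not part of the patent text), the pass/abort decision described above can be reduced to a small state check. Representing lane constraints as lane indices, and the function and argument names, are assumptions for the sketch.

```python
# Hypothetical sketch: enable a pass when the target vehicle is in a different
# lane, and abort a pass in progress if the target is entering the user's lane.

def pass_decision(user_lane, target_lane, target_entering_user_lane, pass_in_progress):
    """Return 'pass', 'abort', or 'hold' for the user vehicle."""
    if pass_in_progress and target_entering_user_lane:
        return "abort"   # target is cutting into the user's lane mid-pass
    if target_lane != user_lane:
        return "pass"    # target occupies a different lane: passing enabled
    return "hold"        # same lane, no pass under way
```

In the patented system the lane assignments would come from the lane constraints detected in the captured images rather than being given directly.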
SYSTEMS AND METHODS FOR MAPPING AN ENVIRONMENT
A method for mapping an environment by an electronic device is described. The method includes obtaining a set of sensor measurements. The method also includes determining a set of voxel occupancy probability distributions respectively corresponding to a set of voxels based on the set of sensor measurements. Each of the voxel occupancy probability distributions represents a probability of occupancy of a voxel over a range of occupation densities. The range includes partial occupation densities.
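As an illustration (not part of the patent text), a per-voxel occupancy probability distribution over a discrete range of occupation densities, including partial densities, might be updated from a measurement as follows. The Gaussian sensor-likelihood model and density bins are assumptions for the sketch.

```python
# Hypothetical sketch: each voxel holds a probability distribution over
# candidate occupation densities (including partial densities), updated
# from a sensor measurement by a Bayesian reweighting.
import math

DENSITIES = [0.0, 0.25, 0.5, 0.75, 1.0]   # candidate occupation densities

def uniform_prior():
    return [1.0 / len(DENSITIES)] * len(DENSITIES)

def update(distribution, measured_density, noise=0.2):
    """Weight each candidate density by its likelihood given the measurement,
    then renormalize to obtain the posterior distribution."""
    likelihoods = [math.exp(-((d - measured_density) ** 2) / (2 * noise ** 2))
                   for d in DENSITIES]
    posterior = [p * l for p, l in zip(distribution, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]
```

Repeated measurements sharpen the distribution around the voxel's true density, which is what lets the map represent partially occupied voxels rather than a binary occupied/free state.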
Surveying system with image-based measuring
A method for image-based point measurement includes moving a surveying system along a path through a surrounding and capturing a series of images of the surrounding. A subset of the images is defined as frames, and a subset of the frames is defined as key-frames. Textures are identified in first and second frames and tracked through successive frames to generate first and second frame feature lists. A structure-from-motion algorithm calculates camera poses for the images based on the first and second frame feature lists. Corresponding image points are identified across at least a plurality of images using feature recognition. Three-dimensional coordinates of a selected image point are determined by forward intersection using the poses of the subset of images in which corresponding image points are identified. The three-dimensional coordinates of the selected image point are presented to the user.
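As an illustration (not part of the patent text), the forward-intersection step can be sketched for the two-view case: given two camera centers and viewing rays toward the same image point, recover the 3D point as the midpoint of the rays' closest approach. The function name and vector representation are assumptions for the sketch.

```python
# Hypothetical sketch of two-ray forward intersection (least-squares
# closest-approach midpoint). c1, c2 are camera centers; d1, d2 are viewing
# ray directions (need not be unit length), all as 3-element sequences.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def forward_intersect(c1, d1, c2, d2):
    """Return the midpoint of the closest approach of the two rays."""
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b            # zero only for parallel rays
    t1 = (b * e - c * d) / denom     # parameter along the first ray
    t2 = (a * e - b * d) / denom     # parameter along the second ray
    p1 = [ci + t1 * di for ci, di in zip(c1, d1)]
    p2 = [ci + t2 * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]
```

With the camera poses from structure from motion, each identified corresponding image point yields one such ray per image; intersecting them gives the three-dimensional coordinates presented to the user.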
VISION-BASED NAVIGATION SYSTEM INCORPORATING MODEL-BASED CORRESPONDENCE DETERMINATION WITH HIGH-CONFIDENCE AMBIGUITY IDENTIFICATION
A vision-based navigation system (e.g., for aircraft on approach to a runway) captures, via camera, 2D images of the runway environment in an image plane. The vision-based navigation system stores a constellation database of runway features and their nominal 3D position information in a constellation plane. Image processors detect within the captured images 2D features potentially corresponding to the constellation features. The vision-based navigation system estimates the optical pose of the camera in the constellation plane by aligning the image plane and constellation plane into a common domain, e.g., via orthocorrection of detected image features into the constellation plane or reprojection of constellation features into the image plane. Based on the common-domain plane, the vision-based navigation system generates candidate correspondence maps (CMAPs) of constellation features mapped to the image features with high-confidence error bounding, from which the optical pose of the camera or aircraft can be estimated.
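As an illustration (not part of the patent text), once constellation features have been reprojected into the image plane, a candidate correspondence map with ambiguity rejection might be built as follows. The threshold values, ratio test, and names are assumptions for the sketch.

```python
# Hypothetical sketch: match reprojected constellation features to detected
# 2D image features by nearest neighbour, keeping only matches that are
# inside an error bound and clearly better than the runner-up (ambiguity test).
import math

def correspond(reprojected, detected, max_err=5.0, ambiguity_ratio=0.8):
    """Return a map from constellation feature name to detected-feature index,
    containing only high-confidence (unambiguous, in-bounds) matches."""
    cmap = {}
    for name, (px, py) in reprojected.items():
        ranked = sorted((math.hypot(px - x, py - y), i)
                        for i, (x, y) in enumerate(detected))
        best_dist, best_idx = ranked[0]
        runner_up = ranked[1][0] if len(ranked) > 1 else float("inf")
        if best_dist <= max_err and best_dist < ambiguity_ratio * runner_up:
            cmap[name] = best_idx    # high-confidence correspondence
    return cmap
```

Features whose best match fails the ratio test are flagged as ambiguous and omitted from the candidate correspondence map, so downstream pose estimation only consumes high-confidence pairs.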