Patent classifications
G01C21/3833
Horizon-based navigation
Systems, devices, methods, and computer-readable media for horizon-based navigation. A method can include: receiving image data corresponding to a geographical region that is in a field of view of an imaging unit and in which the device is situated; based on the received image data, generating, by a processing unit, an image horizon corresponding to a horizon of the geographical region from the perspective of the imaging unit; projecting three-dimensional (3D) points of a 3D point set of the geographical region to an image space of the received image data, resulting in a synthetic image; generating, by the processing unit, a synthetic image horizon of the synthetic image; and, responsive to determining that the image horizon sufficiently correlates with the synthetic image horizon, providing a location corresponding to a perspective of the synthetic image as a location of the processing unit.
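As an illustrative sketch of the correlation step described above (the function names, the column-wise horizon extraction, and normalized cross-correlation as the similarity measure are assumptions, not the claimed implementation):

```python
import numpy as np

def extract_horizon(image):
    """Return, per image column, the row index of the first
    sky-to-ground transition (a simple horizon proxy).
    `image` is a 2D array where nonzero pixels are 'ground'."""
    h, w = image.shape
    horizon = np.full(w, h, dtype=int)
    for col in range(w):
        rows = np.nonzero(image[:, col])[0]
        if rows.size:
            horizon[col] = rows[0]
    return horizon

def horizon_correlation(h_img, h_syn):
    """Normalized cross-correlation between two horizon profiles;
    near 1.0 means the image horizon and synthetic horizon match."""
    a = h_img - h_img.mean()
    b = h_syn - h_syn.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

The location of the synthetic-image perspective would be accepted as the device location when this correlation exceeds a threshold.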
System and method for large-scale lane marking detection using multimodal sensor data
A system and method for large-scale lane marking detection using multimodal sensor data are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on a vehicle; receiving point cloud data from a distance and intensity measuring device mounted on the vehicle; fusing the image data and the point cloud data to produce a set of lane marking points in three-dimensional (3D) space that correlate to the image data and the point cloud data; and generating a lane marking map from the set of lane marking points.
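A minimal sketch of the fusion step, assuming a pinhole camera model, an image-space lane mask from a detector, and intensity thresholding to exploit the high reflectivity of lane paint (the function name and the specific fusion rule are illustrative assumptions):

```python
import numpy as np

def fuse_lane_points(points_3d, intensities, lane_mask, K, intensity_thresh=0.7):
    """Keep 3D LiDAR points that (a) have high return intensity, typical
    of retroreflective paint, and (b) project into pixels flagged as lane
    markings by an image-space detector. K is the 3x3 camera intrinsics;
    points are assumed to already be in the camera frame."""
    fused = []
    h, w = lane_mask.shape
    for p, intensity in zip(points_3d, intensities):
        if intensity < intensity_thresh or p[2] <= 0:
            continue  # weak return or point behind the camera
        u, v, z = K @ p                 # project to image plane
        u, v = int(u / z), int(v / z)
        if 0 <= v < h and 0 <= u < w and lane_mask[v, u]:
            fused.append(p)             # 3D lane-marking candidate
    return np.array(fused)
```

The resulting 3D points would then be aggregated across frames into a lane marking map.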
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
The present technology relates to an information processing apparatus, an information processing method, and a program capable of making a path plan avoiding a crowd.
A cost map indicating the risk of passing through each region is generated using crowd information. The present technology can be applied, for example, to unmanned aerial vehicle (UAV) traffic management (UTM) systems that control UAVs.
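The cost-map generation could be sketched as follows, assuming crowd observations given as (row, col, density) triples on a 2D grid and a linear risk falloff around each crowd center (the falloff model and function name are assumptions):

```python
import numpy as np

def crowd_cost_map(grid_shape, crowds, risk_radius=2.0):
    """Build a grid cost map from crowd observations. Cells within
    `risk_radius` of a crowd center accumulate cost proportional to
    crowd density, so a planner can route a UAV around high-risk
    regions instead of over them."""
    cost = np.zeros(grid_shape)
    rows, cols = np.indices(grid_shape)
    for r, c, density in crowds:
        dist = np.hypot(rows - r, cols - c)
        cost += density * np.clip(1.0 - dist / risk_radius, 0.0, None)
    return cost
```

A path planner would then minimize accumulated cost along candidate routes.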
INFORMATION PROCESSING APPARATUS, MOVING BODY, METHOD FOR CONTROLLING INFORMATION PROCESSING APPARATUS, AND RECORDING MEDIUM
An information processing apparatus includes: a shape information acquiring unit configured to acquire shape information of a surrounding environment of a moving body measured by a sensor mounted in the moving body; a position and posture acquiring unit configured to acquire position and posture information of the sensor; a correction state acquiring unit configured to acquire a performance state relating to a process of correcting the position and posture information; a priority level determining unit configured to determine a priority level of an area for generating a map; and a map generating unit configured to generate the map on the basis of the shape information and the position and posture information acquired at the time of acquisition of the shape information, in which the map generating unit generates the map in order from an area of which the priority level is high in accordance with the performance state.
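A toy sketch of priority-ordered map generation; the rule that a degraded correction state limits generation to only the highest-priority areas is one plausible reading of the abstract, not the disclosed behavior:

```python
def generate_map_by_priority(areas, correction_degraded):
    """Return area IDs in the order they should be mapped. `areas` is a
    list of (area_id, priority) pairs. When position/posture correction
    is degraded, only the top half of areas (at least one) is generated,
    spending the limited pose accuracy where it matters most."""
    ordered = sorted(areas, key=lambda a: a[1], reverse=True)
    if correction_degraded:
        ordered = ordered[: max(1, len(ordered) // 2)]
    return [area_id for area_id, _ in ordered]
```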
APPARATUS AND METHOD FOR UPDATING MAP AND NON-TRANSITORY COMPUTER-READABLE MEDIUM CONTAINING COMPUTER PROGRAM FOR UPDATING MAP
An apparatus for updating a map detects the positions of reference points corresponding to a feature on a road being traveled by a vehicle from surrounding data representing features around the vehicle, and updates probability distributions associated with the respective reference points so that the probabilities of existence of the reference points at the detected positions of the reference points increase. Each of the probability distributions indicates the probability of existence of the corresponding reference point as a function of position.
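The described probability-distribution update resembles a recursive Bayesian (Kalman-style) update; below is a 1D Gaussian sketch in which the posterior concentrates more probability at the newly observed position (the Gaussian form is an assumption — the abstract does not name the distribution family):

```python
def update_reference_point(mean, var, observed_pos, obs_var):
    """Kalman-style 1D update of a reference point's position
    distribution: the posterior mean moves toward the observation and
    the variance shrinks, raising the probability of existence at the
    detected position."""
    gain = var / (var + obs_var)
    new_mean = mean + gain * (observed_pos - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var
```

Repeated detections at a consistent position would progressively tighten each reference point's distribution around that position.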
Training data generation for dynamic objects using high definition map data
According to an aspect of an embodiment, operations may comprise: receiving a plurality of frame sets generated while navigating a local environment; receiving an occupancy map (OMap) representation of the local environment; for each of the plurality of frame sets, generating, using the OMap representation, one or more instances each comprising a spatial cluster of neighborhood 3D points generated from a 3D sensor scan of the local environment, and classifying each of the instances as dynamic or static; tracking instances classified as dynamic across the plurality of frame sets using a tracking algorithm; assigning a single instance ID to tracked instances classified as dynamic across the plurality of frame sets; estimating a bounding box for each of the instances in each of the plurality of frame sets; and employing the instances as ground truth data in training one or more deep learning classifiers.
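The spatial clustering of neighborhood 3D points into instances could be sketched with a simple single-linkage flood fill (the radius parameter and function name are assumptions; a production system would more likely use an OMap-aware or density-based method such as DBSCAN):

```python
import numpy as np

def cluster_instances(points, radius=1.0):
    """Group 3D points into spatial clusters: a point joins a cluster
    when it lies within `radius` of any point already in it. Each
    resulting cluster is one candidate instance."""
    points = np.asarray(points, dtype=float)
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unassigned
                    if np.linalg.norm(points[i] - points[j]) <= radius]
            for j in near:
                unassigned.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(points[cluster])
    return clusters
```

Each cluster would then be classified as dynamic or static, tracked, and given a bounding box.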
SYSTEM AND METHOD FOR AUTOMATED PARKING OF A VEHICLE
A method of parking a vehicle includes creating a map of a parking environment utilizing data from at least one sensor coupled to a vehicle. The method further includes storing the map in a data storage device. The method also includes receiving a first learn signal indicating that the vehicle is located in a first parking position. The method further includes determining a first set of spatial data indicative of the location of the vehicle relative to the map in response to receiving the first learn signal. The method also includes storing the first set of spatial data in the data storage device.
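A minimal sketch of the learn-signal handling, modeling the data storage device as a dictionary (all names are illustrative assumptions):

```python
def handle_learn_signal(storage, parking_map, vehicle_pose):
    """On a 'learn' signal, persist the parking-environment map and
    record the vehicle's pose relative to that map, so the parking
    position can be reproduced in later automated parking runs."""
    storage["map"] = parking_map
    storage.setdefault("learned_positions", []).append(vehicle_pose)
    return storage
```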
SYSTEMS AND METHODS OF COOPERATIVE DEPTH COMPLETION WITH SENSOR DATA SHARING
Systems and methods are provided for utilizing sensor data from sensors of different modalities and from different vehicles to generate a combined image of an environment. Sensor data, such as a point cloud, generated by a LiDAR sensor on a first vehicle may be combined with sensor data, such as image data, generated by a camera on a second vehicle. The point cloud and image data may be combined to provide benefits over either data individually and processed to provide an improved image of the environment of the first and second vehicles. Either vehicle can perform this processing when receiving the sensor data from the other vehicle. An external system can also do the processing when receiving the sensor data from both vehicles. The improved image can then be used by one or both of the vehicles to improve, for example, automated travel through or obstacle identification in the environment.
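Depth completion itself is typically learned; as a crude stand-in, the sketch below fills holes in a sparse depth map (e.g. one vehicle's LiDAR returns projected into another vehicle's camera image) with the nearest valid depth along each row. This is purely illustrative, not the disclosed cooperative method:

```python
import numpy as np

def complete_depth(sparse_depth):
    """Fill missing entries (zeros) in a sparse depth map by copying
    the nearest valid depth in the same row, producing a dense map."""
    out = sparse_depth.astype(float).copy()
    for r in range(out.shape[0]):
        valid = np.nonzero(out[r])[0]
        if valid.size == 0:
            continue  # no measurements in this row; leave as-is
        for c in range(out.shape[1]):
            if out[r, c] == 0:
                nearest = valid[np.argmin(np.abs(valid - c))]
                out[r, c] = out[r, nearest]
    return out
```

The densified depth could then support obstacle identification for either vehicle.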
Methods and apparatus for navigating an autonomous vehicle based on a map updated in regions
In an embodiment, a method comprises detecting, at a processor of an autonomous vehicle, a discrepancy between a map and a property sensed by at least one sensor onboard the autonomous vehicle, the property being associated with an external environment of the autonomous vehicle. In response to detecting the discrepancy, and based on the discrepancy, an annotation for the map is generated via the processor. A signal representing the annotation is transmitted to a compute device that is remote from the autonomous vehicle. A signal representing a map update is received from the compute device that is remote from the autonomous vehicle. The map update is generated based on the annotation, the map update (1) including replacement information for a region of the map associated with the annotation, and (2) not including replacement information for a remainder of the map.
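The region-limited map update could be sketched as a patch applied to a tiled map, leaving unannotated tiles untouched (the tile-dictionary representation is an assumption):

```python
def apply_region_update(vehicle_map, update):
    """Apply a partial map update. `update` carries replacement tiles
    only for the annotated region; tiles outside that region are kept
    from the existing map, so the full map never needs retransmission."""
    patched = dict(vehicle_map)          # copy existing tiles
    patched.update(update["tiles"])      # overwrite only annotated tiles
    return patched
```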
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
Provided is an information processing device configured to: acquire correspondence information between a key frame and a query image, the key frame being disposed in advance on map data; and combine a plurality of pieces of the map data on the basis of the correspondence information.
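Combining map data from key frame/query correspondences typically reduces to estimating a rigid transform between the maps and merging their point sets; a sketch assuming the transform (R, t) has already been estimated from the correspondence information:

```python
import numpy as np

def merge_maps(map_a, map_b, correspondence):
    """Merge two point maps. `correspondence` is a rigid transform
    (R, t) taking map_b's frame into map_a's frame; map_b's points are
    transformed and appended to map_a's."""
    R, t = correspondence
    b_in_a = (np.asarray(map_b, dtype=float) @ np.asarray(R).T) + np.asarray(t)
    return np.vstack([np.asarray(map_a, dtype=float), b_in_a])
```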