G05D1/0251

Density coordinate hashing for volumetric data

A particular voxel is identified within a volume and a hash table is used to obtain volumetric data describing the particular voxel within the volume. Values of x-, y- and z-coordinates in the volume associated with the particular voxel are determined, and an index value associated with the particular voxel is determined according to a hashing algorithm, where the index value is determined from summing weighted values of the x-, y- and z-coordinates, and the weighted values are based on a variable value corresponding to a dimension of the volume. A particular entry is identified in the hash table based on the index value, where the particular entry includes volumetric data, and the volumetric data identifies whether the particular voxel is occupied.
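A minimal sketch of the index computation described above. All names are illustrative and not taken from the patent; the weighting shown (powers of a single volume dimension, as for a cubic grid) is one plausible reading of "weighted values based on a variable value corresponding to a dimension of the volume".

```python
def voxel_index(x: int, y: int, z: int, dim: int) -> int:
    """Sum the coordinates weighted by powers of the volume dimension,
    yielding a unique index per voxel for a dim x dim x dim grid."""
    return x + y * dim + z * dim * dim

# Hash table mapping index -> volumetric data (here just an occupancy flag).
dim = 16
occupancy = {}
occupancy[voxel_index(3, 5, 7, dim)] = True  # mark voxel (3, 5, 7) occupied

def is_occupied(x: int, y: int, z: int, dim: int) -> bool:
    """Look up the voxel's entry; absent entries are treated as unoccupied."""
    return occupancy.get(voxel_index(x, y, z, dim), False)
```

With these weights the index is simply the row-major linear offset of the voxel, so every voxel in the volume maps to a distinct hash-table key.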

Autonomous running device, running control method for autonomous running device, and running control program of autonomous running device
11531344 · 2022-12-20

Provided are an autonomous running device, a running control method for the autonomous running device, and a running control program of the autonomous running device that allow the autonomous running device to reach a destination while continuing estimation of its self-position. The autonomous running device includes a first position estimation unit, a second position estimation unit, and a control unit. The first position estimation unit estimates the position of the autonomous running device on the basis of information about its surroundings, produces the estimated position as first positional information, and updates the first positional information. The second position estimation unit estimates the position of the autonomous running device on the basis of rotation amounts of the wheels, produces the estimated position as second positional information, and updates the second positional information.
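The second position estimation unit's wheel-based estimate can be sketched as standard dead reckoning. The differential-drive formulation below is an assumption for illustration (the patent only specifies "rotation amounts of wheels"); all names and the wheel-base parameter are hypothetical.

```python
import math

def update_odometry(x: float, y: float, theta: float,
                    d_left: float, d_right: float,
                    wheel_base: float) -> tuple:
    """One dead-reckoning step from the distances travelled by the left
    and right wheels (derived from wheel rotation amounts)."""
    d_center = (d_left + d_right) / 2.0          # forward travel of the body
    d_theta = (d_right - d_left) / wheel_base    # change in heading
    # Advance along the average heading over the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```

Because each step integrates the previous estimate, this second positional information drifts over time, which is why the device also maintains the first, surroundings-based estimate.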

Sensor placement to reduce blind spots
11531114 · 2022-12-20

Systems and methods for imaging may include: receiving a first point cloud from a first LiDAR sensor mounted at a first location behind a vehicle grille, the first point cloud representing the scene in front of the vehicle, wherein as a result of the grille the scene represented by the first point cloud data is partially occluded with a first pattern of occlusion; receiving a second point cloud from a second LiDAR sensor mounted at a second location behind the vehicle grille; combining the first and second point clouds to generate a composite point cloud data set, wherein the first LiDAR sensor is located relative to the second LiDAR sensor such that when the point cloud data from the first LiDAR sensor and the second LiDAR sensor are combined, the first pattern of occlusion is at least partially compensated; and processing the combined point cloud data set.
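The combining step can be sketched as transforming the second sensor's points into the first sensor's frame and concatenating them, so that points the first sensor's occlusion pattern removed are supplied by the second sensor. This is a generic point-cloud fusion sketch, not the patent's implementation; the extrinsic transform is assumed to come from calibration.

```python
import numpy as np

def combine_point_clouds(cloud_a: np.ndarray, cloud_b: np.ndarray,
                         transform_b_to_a: np.ndarray) -> np.ndarray:
    """Fuse two point clouds into a composite set.

    cloud_a, cloud_b: (N, 3) arrays of x/y/z points from each LiDAR.
    transform_b_to_a: 4x4 homogeneous transform mapping sensor B's
    frame into sensor A's frame (from extrinsic calibration).
    """
    # Lift cloud_b to homogeneous coordinates, apply the transform,
    # then drop the homogeneous component.
    homogeneous = np.hstack([cloud_b, np.ones((cloud_b.shape[0], 1))])
    cloud_b_in_a = (transform_b_to_a @ homogeneous.T).T[:, :3]
    return np.vstack([cloud_a, cloud_b_in_a])
```

Regions occluded for one sensor but visible to the other contribute points through exactly one of the two inputs, so the composite set has fewer gaps than either cloud alone.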

System for automated exploration by an autonomous mobile device using markers based on image features

An autonomous mobile device (AMD) uses sensors to explore a physical space and determine the locations of obstacles. Simultaneous localization and mapping (SLAM) techniques are used by the AMD to designate as keyframes some images and their associated descriptors of features in the space. Each keyframe indicates a location and orientation of the AMD relative to those features. Anchors are specified relative to keyframes. A marker is specified relative to one or more anchors. Because markers are associated with features in the physical space, they maintain their association with the physical space through various processes such as SLAM loop closures. Markers may specify locations in the physical space, such as navigation waypoints, navigation destinations such as a goal pose for exploring an unexplored area, observation targets to facilitate exploration, and so forth. Markers may also be used to specify block-listed locations to be avoided during exploration.
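The keyframe–anchor–marker chain can be sketched as a small linked data structure. This is a simplified illustration (names are hypothetical, offsets are treated as pure translations with no rotation composition), showing why a marker follows its keyframe when SLAM adjusts poses.

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    pose: tuple  # (x, y, theta) in the map frame; rewritten by SLAM updates

@dataclass
class Anchor:
    keyframe: Keyframe
    offset: tuple  # (dx, dy) relative to the keyframe

@dataclass
class Marker:
    anchor: Anchor
    label: str  # e.g. "waypoint", "goal pose", "block-listed"

    def world_position(self) -> tuple:
        """Resolve the marker's map-frame position through its anchor."""
        kx, ky, _ = self.anchor.keyframe.pose
        ox, oy = self.anchor.offset
        return (kx + ox, ky + oy)
```

When a loop closure rewrites a keyframe's pose, every marker chained to that keyframe resolves to a corrected position automatically, with no per-marker bookkeeping.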

SELECTIVELY CAUSING REMOTE PROCESSING OF DATA FOR AN AUTONOMOUS DIGGING OPERATION

A machine may include an implement; one or more sensor devices; and a controller. The controller may be configured to receive, from the one or more sensor devices, data regarding a ground surface on which the machine is to perform a digging operation; transmit the data to one or more remote computing devices to cause the one or more remote computing devices to generate digging information based on the data; and receive the digging information from the one or more remote computing devices. The digging information may include information identifying a sequence of digging locations in an area of the ground surface and information identifying corresponding dumping locations. Based on the digging information, the controller may be configured to cause the machine to navigate to a digging location of the digging locations, and cause the implement to initiate the digging operation at the digging location.

METHOD AND APPARATUS FOR MODELING AN ENVIRONMENT PROXIMATE AN AUTONOMOUS SYSTEM

A method and apparatus for modeling the environment proximate an autonomous system. The method and apparatus access vision data, assign semantic labels to points in the vision data, process points that are identified as being a drivable surface (ground), and perform an optimization over the identified points to form a surface model. The model is subsequently used for detecting objects, planning, and mapping.

LIGHT INTERFERENCE DETECTION DURING VEHICLE NAVIGATION
20220374638 · 2022-11-24

In some examples, a processor may receive images from a camera mounted on a vehicle. The processor may generate a disparity image based on features in at least one of the images. In addition, the processor may determine at least one region in a first image of the received images that has a brightness that exceeds a brightness threshold. Further, the processor may determine at least one region in the disparity image having a level of disparity information below a disparity information threshold. The processor may determine a region of light interference based on an overlap between at least one region in the first image and at least one region in the disparity image, and may perform at least one action based on the region of light interference.
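The overlap step above can be sketched as intersecting two boolean masks: one for pixels whose brightness exceeds the threshold, and one for pixels whose disparity information falls below the threshold. This is a minimal illustration with hypothetical names and per-pixel thresholds, not the patent's region-level implementation.

```python
import numpy as np

def light_interference_mask(image: np.ndarray, disparity: np.ndarray,
                            brightness_thresh: float,
                            disparity_info_thresh: float) -> np.ndarray:
    """Flag pixels that are both very bright in the camera image and
    lacking disparity information, suggesting light interference
    (e.g. glare or a strong light source washing out stereo matching)."""
    bright = image > brightness_thresh          # over-bright regions
    sparse = disparity < disparity_info_thresh  # regions with little disparity info
    return bright & sparse                      # overlap of the two regions
```

An action (for example, discounting detections inside the mask) would then be taken only where both conditions hold, since brightness alone or missing disparity alone can have benign causes.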

Control of a transportation vehicle
11507112 · 2022-11-22

A control system for a transportation vehicle comprises a sensor vehicle that has at least one sensor for scanning an environment, wherein the sensor vehicle is configured to move autonomously to the transportation vehicle once it is detected, and a control unit for controlling the transportation vehicle on the basis of sensor data from the at least one sensor.

LONG-RANGE OBJECT DETECTION, LOCALIZATION, TRACKING AND CLASSIFICATION FOR AUTONOMOUS VEHICLES

Aspects of the disclosure relate to controlling a vehicle. For instance, using a camera, a first camera image including a first object may be captured. A first bounding box for the first object and a distance to the first object may be identified. A second camera image including a second object may be captured. A second bounding box for the second object and a distance to the second object may be identified. Whether the first object is the second object may be determined using a plurality of models: a first model to compare visual similarity of the two bounding boxes, a second model to compare a three-dimensional location based on the distance to the first object with a three-dimensional location based on the distance to the second object, and a third model to compare results from the first and second models. The vehicle may be controlled in an autonomous driving mode based on a result of the third model.

AI mobile robot for learning obstacle and method of controlling the same
11586211 · 2023-02-21

An artificial intelligence (AI) mobile robot and a method of controlling the same for learning an obstacle are configured to capture images through an image acquirer while traveling, to store a plurality of captured image data, to determine an obstacle from the image data, to set a response motion corresponding to the obstacle, and to perform the set response motion depending on the obstacle. The obstacle is thus recognized through the captured image data and is more easily determined by repeatedly learning images. The obstacle may be determined before it is detected, or from the time point of detecting it, so that a response motion is performed. Even if the same detection signal is input when a plurality of different obstacles is detected, the obstacle is determined through the image and different operations are performed depending on the obstacle, allowing the robot to respond to various obstacles. Accordingly, the obstacle is effectively avoided and an operation is performed depending on the type of the obstacle.