Patent classifications
G05D2105/87
SYSTEM AND METHOD FOR OPTIMIZING PATH EXPLORATION PARAMETERS BASED ON DEEP REINFORCEMENT LEARNING
The present invention relates to the technical field of path planning, and provides a deep reinforcement learning-based path exploration parameter optimization system. The system comprises: a variable parameter path planning module, configured to perform node exploration based on a deep reinforcement learning network, conduct collision detection on child nodes in a child node set, calculate cost values for all child nodes, and finally generate a loading and parking path using a Reeds-Shepp curve; an environmental state space modeling module, configured to perform regional division of obstacles around a current node and conduct environmental state space modeling; and a deep learning parameter optimization module, configured to construct a deep learning network to compute an optimal step size and an optimal steering angle, build a reward function to optimize the deep learning network, and simultaneously execute a training process of the deep learning network.
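The node-exploration step described above (collision-check the child node set, cost every surviving child, expand the cheapest) can be sketched as follows. All names (`select_best_child`, the collision predicate, the cost function) are illustrative; the patent does not specify an API, and the deep-learning-derived step size and steering angle are assumed to have already produced the candidate children.

```python
import math

def select_best_child(children, is_collision_free, cost_fn):
    """Filter collision-free child nodes and return the lowest-cost one.

    `children` stands in for the patent's child node set;
    `is_collision_free` and `cost_fn` are hypothetical stand-ins for
    its collision detector and cost calculation.
    """
    feasible = [c for c in children if is_collision_free(c)]
    if not feasible:
        return None  # no expandable child; the planner would re-sample
    return min(feasible, key=cost_fn)

# Usage: children as 2-D points, cost = distance to a goal,
# with one child blocked by an obstacle.
goal = (10.0, 0.0)
children = [(1.0, 0.0), (2.0, 5.0), (-1.0, 0.0)]
blocked = {(2.0, 5.0)}
best = select_best_child(children,
                         lambda c: c not in blocked,
                         lambda c: math.dist(c, goal))
# → (1.0, 0.0)
```

The Reeds-Shepp curve generation for the final loading and parking path is a separate step not shown here.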
MAPPING CORRECTION METHOD AND RELATED APPARATUS
Disclosed in the embodiments of the present application are a mapping correction method and a related apparatus, which are applied to a lawn mowing robot. The method comprises: receiving a correction instruction in response to an error occurring in a mapping trajectory of the lawn mowing robot; determining, while the lawn mowing robot is moving, a target correction point in the mapping trajectory, the target correction point being located at a distance greater than a predetermined distance from a start point of the mapping trajectory; and returning to the target correction point and continuing the mapping operation based on a correct mapping trajectory and the target correction point, the correct mapping trajectory being the portion of the mapping trajectory between the start point and the target correction point.
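The determination step above (find a point on the recorded trajectory farther than the predetermined distance from the start, and keep the trajectory up to that point as the "correct" portion) can be sketched as below. The function name and tuple-based trajectory representation are assumptions for illustration.

```python
import math

def find_target_correction_point(trajectory, start, min_distance):
    """Walk the recorded mapping trajectory and return the index of the
    first point located farther than `min_distance` from `start`,
    together with the correct portion of the trajectory up to and
    including that point. Returns (None, trajectory) if no point
    qualifies.
    """
    for i, point in enumerate(trajectory):
        if math.dist(point, start) > min_distance:
            return i, trajectory[: i + 1]
    return None, trajectory
```

In the patent's flow, the robot would then drive back to the returned point and resume mapping from there.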
METHOD AND DEVICE FOR GENERATING DEPTH MAP
Disclosed are a depth map generation method and device. The method includes: acquiring an RGB color image via a monocular camera provided in a robot system; acquiring a three-dimensional point cloud via a light detection and ranging (LiDAR) sensor provided in the robot system; generating, from the three-dimensional point cloud, a sparse depth map including depth information for only some points in a given space; inputting the RGB color image and the sparse depth map into a pre-trained diffusion model; and generating, based on the diffusion model, a dense depth map including depth information for all points in the given space, wherein the diffusion model is trained with a loss function that reflects confidence, a numerical measure of the reliability of the diffusion model's prediction.
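A confidence-reflecting loss of the kind the abstract describes is often written as a per-pixel error weighted by predicted confidence, plus a log penalty that stops the model from trivially driving confidence to zero. The exact loss in the patent is not given; the formulation below is a common assumption, with illustrative names.

```python
import numpy as np

def confidence_weighted_l1(pred, target, confidence, eps=1e-6):
    """L1 depth loss weighted by per-pixel confidence.

    `confidence` in (0, 1] scales each pixel's error; the -log term
    penalizes low confidence so the model cannot ignore all pixels.
    """
    confidence = np.clip(confidence, eps, 1.0)
    per_pixel = confidence * np.abs(pred - target) - np.log(confidence)
    return float(np.mean(per_pixel))
```

With a perfect prediction at full confidence, both terms vanish and the loss is zero; increasing the depth error raises the loss in proportion to the confidence assigned to the erroneous pixels.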
ELECTRONIC DEVICE FOR EXPLORATION PATH PLANNING OF UNMANNED AERIAL VEHICLE AND OPERATING METHOD OF ELECTRONIC DEVICE
Disclosed is an electronic device which includes a position estimation unit that estimates position information of an unmanned aerial vehicle based on some of a plurality of sensing information received from the unmanned aerial vehicle, an environment mapping unit that generates mapping information based on the remaining sensing information, and an exploration path planning unit that generates a path graph based on the position information and the mapping information, explores a plurality of candidate paths based on the path graph, calculates exploration gains for the candidate paths based on a distance from an obstacle and a past flight path, and determines an optimal exploration path based on the calculated exploration gains.
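The exploration-gain calculation (reward clearance from obstacles, penalize overlap with the past flight path, pick the highest-scoring candidate) can be sketched as below. The weights, the set-overlap revisit measure, and all names are illustrative assumptions; the patent only states that gain depends on obstacle distance and the past flight path.

```python
def exploration_gain(candidate, obstacle_distance, past_path,
                     w_obstacle=1.0, w_revisit=1.0):
    """Score one candidate path: obstacle clearance minus a penalty
    for each waypoint already visited on the past flight path.
    """
    visited = set(past_path)
    revisits = sum(1 for wp in candidate if wp in visited)
    return w_obstacle * obstacle_distance(candidate) - w_revisit * revisits

def best_path(candidates, obstacle_distance, past_path):
    """Pick the candidate with maximal exploration gain."""
    return max(candidates,
               key=lambda c: exploration_gain(c, obstacle_distance, past_path))
```

Here `obstacle_distance` would typically return the candidate's minimum clearance from the mapped obstacles.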
DRONE BASED PRECISION AGRICULTURE FIELD MANAGEMENT SYSTEM
Systems and methods are disclosed for imaging a crop field; probing soil at a location of the field based on the imaging to obtain diagnostic data; sampling soil at a location of the field based on the imaging; and spraying the field based on the imaging, the soil probing, or the soil sampling. A system uses a tractor having a spray system; a drone having a tool deployment module comprising a reel and a line; a drone launch pad comprising an imaging module, a probing module, and a sample collection module, each attachable to and detachable from the line, along with drone batteries; and a controller to receive diagnostic data from the imaging module, the probing module, or the sample collection module, and to transmit spray instructions to the spray system.
METHOD AND SYSTEM FOR AUTONOMOUS ROBOT EXPLORATION BASED ON FRONTIER REGION LEARNING
A method and system for autonomous robot exploration based on frontier region learning are provided. The learning-based autonomous robot exploration system includes a frontier region detector configured to detect a frontier region corresponding to a real-time grid map by inputting the real-time grid map of a region in which a robot is located to a frontier region detection network, a distance predictor configured to measure a distance between the robot and each frontier point included in the detected frontier region, and a frontier point determiner configured to determine, among frontier points, a frontier point to which the robot moves for exploration based on the distance between the robot and each frontier point.
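The frontier point determiner's distance-based selection can be sketched as a nearest-first policy: among the frontier points returned by the detection network, move to the one closest to the robot. The function name is an assumption, and the patent's determiner may weigh factors beyond raw distance.

```python
import math

def choose_frontier_point(robot_pos, frontier_points):
    """Return the frontier point nearest to the robot as the next
    exploration target (one simple distance-based policy).
    """
    if not frontier_points:
        return None  # map fully explored: no frontier remains
    return min(frontier_points, key=lambda p: math.dist(robot_pos, p))
```

In the full system, `frontier_points` would be extracted from the frontier region that the detection network predicts on the real-time grid map.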
On-robot data collection
Systems and methods are provided for improved generation and selection of robot sensor data for manual annotation and/or for use in training the machine learning models used to operate robots. An on-robot controller can determine that a cross-modal inconsistency exists, that a temporally proximate target task failed, and/or that the confidence in a model output indicates that particular sensor data should be transmitted to a remote system for human annotation and/or for updating the machine learning model(s) of the robot. Embedding vector(s) representing such selected sensor data (e.g., representing common aspects across a population of sets of sensor data) can also be determined and transmitted to the robot. The robot can then determine embeddings for its sensor data and, if those embeddings are similar enough to the transmitted embedding(s), transmit the sensor data to the remote system for annotation and/or model updating.
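The on-robot gating step in the last sentence (upload sensor data only when its embedding is similar enough to a reference embedding pushed from the remote system) can be sketched with cosine similarity. The threshold value, similarity measure, and function names are assumptions; the patent does not specify them.

```python
import math

def should_upload(sensor_embedding, reference_embeddings, threshold=0.9):
    """Return True when the on-robot embedding is cosine-similar to
    any reference embedding received from the remote system.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    return any(cosine(sensor_embedding, ref) >= threshold
               for ref in reference_embeddings)
```

A matching embedding signals that the new data resembles the population the remote system wants more examples of, so the raw sensor data is queued for annotation.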
INDOOR MAP CONSTRUCTION METHOD, ROBOT AND COMPUTER-READABLE STORAGE MEDIUM
An indoor map construction method includes: after a robot enters a mapping mode, acquiring initial point cloud data at a starting position; extracting, from the initial point cloud data, a first principal direction; after rotating the initial point cloud data according to the first principal direction, controlling the robot to travel a preset distance so as to acquire updated point cloud data; extracting a second principal direction from the updated point cloud data; when the first principal direction is different from the second principal direction, rotating the acquired point cloud data according to the second principal direction and controlling the robot to continue traveling, until indoor global point cloud data are obtained; extracting a third principal direction from the global point cloud data; and when the second principal direction is different from the third principal direction, rotating the global point cloud data according to the third principal direction.
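The principal-direction extraction and rotation steps above can be sketched with a standard PCA approach: take the leading eigenvector of the 2-D point cloud's covariance matrix as the dominant (wall) direction, then rotate the cloud to align that direction with an axis. This is one common way to implement the extraction; the patent does not fix a method, and the names below are illustrative.

```python
import numpy as np

def principal_direction(points):
    """Dominant direction of an N x 2 point cloud, as an angle in
    radians, via the leading eigenvector of the covariance matrix.
    (The eigenvector sign is arbitrary, so the angle is defined
    modulo pi.)
    """
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, np.argmax(eigvals)]  # leading eigenvector
    return float(np.arctan2(v[1], v[0]))

def rotate_to_axis(points, angle):
    """Rotate the cloud by -angle so the principal direction aligns
    with the x-axis."""
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    return points @ rot.T
```

Comparing the angles extracted before and after the robot travels the preset distance gives the "first direction differs from second direction" test in the method.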
Unmanned platform with bionic visual multi-source information and intelligent perception
Disclosed is an unmanned platform with bionic visual multi-source information and intelligent perception. The unmanned platform is equipped with a bionic polarization vision/inertia/laser radar combined navigation module, a deep learning object detection module and an autonomous obstacle avoidance module. The combined navigation module is configured to position and orient the unmanned platform in real time; the deep learning object detection module is configured to sense the environment around the unmanned platform according to RGB images of the surrounding environment collected by the combined navigation module; and the autonomous obstacle avoidance module determines, according to the objects identified by the deep learning object detection module, whether there are any obstacles around the unmanned platform during operation, and performs autonomous obstacle avoidance in combination with the navigation and positioning information of the carrier. The concealment, autonomous navigation, object detection and autonomous obstacle avoidance capabilities of the unmanned platform are thus improved.
Automated aerial data capture for 3D modeling of unknown objects in unknown environments
A system and method are disclosed for a multi-phase process of automated data capture for photogrammetry and 3D model building of an unknown object (311) in an unknown environment. A planner module (152) generates a flight plan (413) for a camera drone (110) to fly autonomously on a flight path along a virtual polygon grid (302) defined above the target object (311) during a survey phase. A model builder computer (153) receives a point cloud dataset (321) captured by a LiDAR sensor on the camera drone (301) during the survey flight and constructs a low-resolution 3D mesh (331) of the target object (311). The planner module (152) generates a flight path (413) for the camera drone's inspection phase, with virtual waypoints surrounding the target object (311) at a marginal distance from the surface defined by the low-resolution 3D mesh (331). The model builder (153, 163) builds a high-resolution 3D model (422) of the target object (311) using photogrammetry processing of high-resolution images captured by the camera drone (411, 412) during the inspection phase.
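The inspection-phase waypoint placement (virtual waypoints "at a marginal distance from the surface" of the low-resolution mesh) can be sketched by offsetting each mesh vertex outward along its normal. This is a minimal sketch of the idea under assumed inputs; a real planner would also deduplicate, order, and collision-check the waypoints, and the function name is illustrative.

```python
import numpy as np

def inspection_waypoints(vertices, normals, margin):
    """Place one inspection waypoint per mesh vertex, offset outward
    along the (normalized) vertex normal by `margin`.

    vertices: N x 3 array of mesh vertex positions.
    normals:  N x 3 array of outward vertex normals (any length).
    """
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return vertices + margin * unit
```

The camera drone would then visit these waypoints, facing back toward the surface, to capture the high-resolution images used for photogrammetry.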