G05D1/2465

SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) METHOD
20240303856 · 2024-09-12 ·

A method for determining an ego pose of a mobile system and creating a surfel map of a surrounding area of the mobile system via an optimization problem represented by a factor graph includes the steps of: receiving environment sensor data generated by an environment sensor attached to the mobile system, wherein the environment sensor surveys the surrounding area of the mobile system, and wherein the environment sensor data represent the surrounding area of the mobile system as a point cloud; generating surfels by converting the point cloud of the received environment sensor data into surfel data; identifying new surfels and known surfels in the generated surfels by comparing the surfel data with the surfel map; and adding a surfel factor for the known surfels to the factor graph and/or adding a surfel node and a surfel factor for the new surfels to the factor graph.
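The claimed pipeline — convert points to surfels, classify them as new or known against the map, then extend the factor graph — can be sketched as follows. This is a minimal illustration, not the patented method: the surfel extraction (voxel centroid plus PCA normal), the distance-only matching threshold, and the graph bookkeeping are all simplifying assumptions.

```python
import numpy as np

def points_to_surfels(points, voxel=0.5):
    """Convert a point cloud (N, 3) into surfels: one centroid + normal per voxel."""
    keys = np.floor(points / voxel).astype(int)
    surfels = []
    for k in np.unique(keys, axis=0):
        pts = points[(keys == k).all(axis=1)]
        if len(pts) < 3:
            continue
        centroid = pts.mean(axis=0)
        # Normal = eigenvector of the smallest covariance eigenvalue (PCA).
        cov = np.cov((pts - centroid).T)
        _, v = np.linalg.eigh(cov)
        surfels.append((centroid, v[:, 0]))
    return surfels

def classify_surfels(surfels, surfel_map, max_dist=0.3):
    """Split surfels into 'known' (near an existing map surfel) and 'new'."""
    known, new = [], []
    for c, n in surfels:
        if any(np.linalg.norm(c - mc) < max_dist for mc, _ in surfel_map):
            known.append((c, n))
        else:
            new.append((c, n))
    return known, new

def update_graph(graph, known, new):
    """Known surfels add only a factor; new surfels add a node plus a factor,
    mirroring the claimed factor-graph update."""
    for s in known:
        graph["factors"].append(("surfel_factor", s))
    for s in new:
        graph["nodes"].append(("surfel_node", s))
        graph["factors"].append(("surfel_factor", s))
    return graph
```

A real implementation would also weigh the normal agreement during matching and feed the factors to a nonlinear solver rather than a list.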

ERROR MAP SURFACE REPRESENTATION FOR MULTI-VENDOR FLEET MANAGER OF AUTONOMOUS SYSTEM

Current approaches to controlling robots from multiple vendors typically require multiple software systems that define vendor-exclusive fleet-manager or dispatch systems. Autonomous devices (e.g., robots, drones, vehicles) from multiple vendors, each using its own locally sourced map, can instead be controlled together. For example, maps from individual robots can be translated to a base map that can be used to command and control hybrid fleets of robots.
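In the simplest case, translating a vendor's locally sourced map into a shared base map reduces to a rigid transform per vendor map. A minimal 2-D sketch, assuming the pose of each vendor map in the base frame (rotation angle and offset) is known — names and parameters here are illustrative, not from the patent:

```python
import math

def local_to_base(x, y, theta_deg, tx, ty):
    """Transform a point (x, y) from a vendor-local map into the shared base
    map via a rigid 2-D transform: rotate by theta, then translate by (tx, ty)."""
    th = math.radians(theta_deg)
    bx = x * math.cos(th) - y * math.sin(th) + tx
    by = x * math.sin(th) + y * math.cos(th) + ty
    return bx, by
```

With one such transform registered per vendor, a fleet manager can express every robot's pose and goals in the common base frame before dispatching commands.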

Augmented reality interface for authoring tasks for execution by a programmable robot

Disclosed is a visual and spatial programming system for robot navigation and robot-IoT task authoring. Programmable mobile robots serve as binding agents to link stationary IoT devices and perform collaborative tasks. Three key elements of robot task planning (human-robot-IoT) are coherently connected with one single smartphone device. Users can perform visual task authoring, using an augmented reality interface, in a manner analogous to the real tasks that they would like the robot to perform. The mobile device mediates interactions between the user, robot(s), and IoT device-oriented tasks, guiding the path planning execution with Simultaneous Localization and Mapping (SLAM) to enable robust room-scale navigation and interactive task authoring.

IMPROVEMENTS IN IMAGE ACQUISITION PLANNING SYSTEMS AND METHODS USED TO GENERATE INFORMATION FOR STRUCTURES OF INTEREST
20240371157 · 2024-11-07 ·

The present disclosure relates to improvements in systems and methods in acquiring images via imaging devices, where such imaging devices can be configured, in some implementations, with an unmanned aerial vehicle or other vehicle types, as well as being hand-held. Images are acquired from the imaging devices according to capture plans where useful information types about a structure of interest (or objects, items, etc.) can be derived from a structure image acquisition event. Images acquired from capture plans can be evaluated to generate improvements in capture plans for use in subsequent structure image acquisition events. Capture plans provided herein generate accurate information as to all or part of the structure of interest, where accuracy is in relation to the real-life structure incorporated in the acquired images.

CONTROL SYSTEM, CONTROL METHOD, AND STORAGE MEDIUM
20240370027 · 2024-11-07 ·

A control system is capable of generating route information about a movement route of a mobile object in a three-dimensional space defined according to a predetermined coordinate system. The system formats spatial information about a type of physical object capable of existing in or entering the three-dimensional space in association with a unique identifier, and generates the route information about the movement route of the mobile object on the basis of the spatial information and type information of the mobile object.

CLUTTER TIDYING ROBOT UTILIZING FLOOR SEGMENTATION FOR MAPPING AND NAVIGATION SYSTEM

A method and apparatus are disclosed for a clutter tidying robot utilizing floor segmentation for its mapping and navigation system, whereby a perception module and a navigation module transform lidar and image data from the lidar sensors and cameras of the robot sensing system, using segmentation and pseudo-laserscan or point cloud transformations, to generate global and local maps. The robot pose and maps are transmitted to a robot brain that directs an action module to produce robot action commands controlling the robot's operation using the pose and map data. In this manner, multi-stage planning and sophisticated obstacle avoidance techniques may be incorporated into autonomous robot operations.
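One common way a perception module produces a pseudo-laserscan like the one mentioned above is to collapse detected obstacle points into a minimum range per angular bin. A simplified sketch, assuming 2-D obstacle points already expressed in the robot frame (the bin count and maximum range are illustrative assumptions):

```python
import math

def pseudo_laserscan(obstacle_points, n_beams=360, max_range=10.0):
    """Collapse 2-D obstacle points (robot frame, meters) into a pseudo-laserscan:
    the minimum obstacle range per angular bin, a planner-friendly stand-in
    for a real lidar scan."""
    ranges = [max_range] * n_beams
    for x, y in obstacle_points:
        r = math.hypot(x, y)
        if r == 0 or r > max_range:
            continue
        # Map the bearing [-pi, pi) onto a bin index [0, n_beams).
        bin_i = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * n_beams) % n_beams
        ranges[bin_i] = min(ranges[bin_i], r)
    return ranges
```

The resulting range array can be fed to standard laserscan-based costmap and obstacle-avoidance pipelines without modification.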

Method for controlling a vehicle for harvesting agricultural material

An upper point cloud estimator is configured to estimate a three-dimensional representation of the crop canopy based on collected stereo vision image data. A lower point cloud estimator is configured to estimate a ground three-dimensional representation, or lower point cloud of the ground, based on the determined average. The electronic data processor is configured to determine one or more differences between the upper point cloud (or upper surface) of the crop canopy and the lower point cloud of the ground, where each difference is associated with a cell within a grid defined by the front region. The electronic data processor is capable of providing the differences to a data processing system to estimate a yield or differential yield for the front region, among other things.
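The per-cell canopy-minus-ground difference can be illustrated with a small grid computation. This is a hedged sketch, not the patented estimator: it assumes both point clouds are already in a common frame and simply averages heights per cell.

```python
import numpy as np

def canopy_height_grid(upper_pts, lower_pts, cell=1.0, nx=4, ny=4):
    """Per-cell difference between the crop-canopy (upper) surface and the
    ground (lower) surface over an nx-by-ny grid in the front region.
    Points are (x, y, z) arrays; cells with no points stay NaN."""
    def grid_mean(pts):
        g = np.full((nx, ny), np.nan)
        for ix in range(nx):
            for iy in range(ny):
                m = (pts[:, 0] // cell == ix) & (pts[:, 1] // cell == iy)
                if m.any():
                    g[ix, iy] = pts[m, 2].mean()
        return g
    # Canopy height per cell = mean upper z minus mean lower z.
    return grid_mean(upper_pts) - grid_mean(lower_pts)
```

A downstream system would then map these per-cell heights to yield estimates via a crop-specific calibration.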

THREE-DIMENSIONAL LOCALIZATION OF A DEVICE WITHIN A GRAIN BIN

A localization system comprises: a device; a master unit which wirelessly transmits a first localization signal; a plurality of lateration units distributed about the area within which the device is being localized, wherein each lateration unit of the plurality independently starts its own timer upon its receipt of the first localization signal; and a localization unit. The device receives the first localization signal and responsively wirelessly transmits a second localization signal. Each of the lateration units: independently receives the second localization signal; stops its respective timer responsive to receipt of the second localization signal; and wirelessly transmits a timer count signal to a localization unit. The timer count signal identifies the transmitting lateration unit and a count of its respective timer. The localization unit utilizes the plurality of timer counts along with respective distances between the master unit and the lateration units to localize the device via time-of-flight lateration.
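The timing scheme can be sketched numerically: each lateration unit's count equals the flight time via the device (master to device to unit) minus the master-to-unit flight time at which its timer started, and the localization unit inverts those counts to a position. The solver below uses a coarse grid search purely for illustration; a real system would use least squares and account for the device's turnaround delay.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def timer_count(master, unit, device):
    """A lateration unit's elapsed timer: it starts when the master's signal
    arrives (after |master-unit| / c) and stops when the device's reply
    arrives (after |master-device| / c + |device-unit| / c), ignoring the
    device's turnaround delay."""
    d = math.dist
    start = d(master, unit) / C
    stop = (d(master, device) + d(device, unit)) / C
    return stop - start

def localize(master, units, counts, extent=10.0, step=0.1):
    """Grid-search the 2-D position whose predicted counts best match the
    measured ones (a coarse stand-in for a proper least-squares solver)."""
    best, best_err = None, float("inf")
    n = int(extent / step)
    for i in range(n + 1):
        for j in range(n + 1):
            p = (i * step, j * step)
            err = sum((timer_count(master, u, p) - c) ** 2
                      for u, c in zip(units, counts))
            if err < best_err:
                best, best_err = p, err
    return best
```

Note that because every timer starts at its own unit's reception of the first signal, the known master-to-unit distances are what make the counts comparable across units.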

Enhanced object detection for autonomous vehicles based on field view
12198396 · 2025-01-14 ·

Systems and methods for enhanced object detection for autonomous vehicles based on field of view. An example method includes obtaining an image from an image sensor of one or more image sensors positioned about a vehicle. A field of view for the image is determined, with the field of view being associated with a vanishing line. A crop portion corresponding to the field of view is generated from the image, with a remaining portion of the image being downsampled. Information associated with detected objects depicted in the image is outputted based on a convolutional neural network, with objects being detected by performing a forward pass through the convolutional neural network of the crop portion and the remaining portion.
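The crop-plus-downsample step can be sketched with array slicing: a full-resolution crop is taken around the vanishing line (where distant objects appear smallest), and the remainder is subsampled before both are passed to the network. The crop size, centering, and stride here are illustrative assumptions, not values from the patent.

```python
import numpy as np

def crop_and_downsample(image, vanishing_row, crop_h=64, crop_w=64, ds=2):
    """Return (crop, remaining): a full-resolution crop centered vertically on
    the vanishing line, plus the whole image subsampled by stride ds."""
    h, w, _ = image.shape
    top = max(0, min(h - crop_h, vanishing_row - crop_h // 2))
    left = (w - crop_w) // 2
    crop = image[top:top + crop_h, left:left + crop_w]
    remaining = image[::ds, ::ds]
    return crop, remaining
```

The detector then sees small distant objects at native resolution in the crop while the downsampled remainder keeps the overall forward-pass cost low.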

CONTROL METHOD AND CONTROL SYSTEM

A vehicle system (1) switches control between a restart period and a normal travel period. The restart period begins after restart of a function of controlling traveling of the vehicle, following a parking period during which the function is stopped, and lasts until the vehicle moves and first detects a magnetic marker; the normal travel period begins once the vehicle detects the magnetic marker. In the restart period, restart control (S105) is performed in which a position of the vehicle is identified based on a position measured in the restart period to cause the vehicle to travel. In the normal travel period, normal travel control (S107) is performed in which the position of the vehicle is identified based on a position of the detected magnetic marker to cause the vehicle to travel.
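The two-period switching logic can be sketched as a tiny state machine: the restart period uses the measured position, and the first magnetic-marker detection switches the system to marker-based normal travel. The function names and string states are illustrative only.

```python
def step(state, marker_detected):
    """Transition 'restart' -> 'normal' on the first magnetic-marker
    detection; otherwise stay in the current state."""
    if state == "restart" and marker_detected:
        return "normal"
    return state

def identify_position(state, measured_pos, marker_pos):
    """Pick the position source per the described scheme: the measured
    position during the restart period, the detected marker's position
    during normal travel."""
    return measured_pos if state == "restart" else marker_pos
```

A full controller would additionally re-enter the restart state whenever the traveling-control function is stopped and restarted after parking.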