G05D2201/0218

Terrain trafficability assessment for autonomous or semi-autonomous rover or vehicle

A rover or a semi-autonomous or autonomous vehicle may use an image classifier to determine the terrain class of regions in an image of the terrain ahead. The regions of the image are also used to estimate the slope of the terrain in each region. The terrain class and slope are used to predict the amount of slip the rover will experience when traversing the terrain of each region. A heuristic mapping for the terrain class may be applied to the predicted slip amount to determine a hazard level for the rover or vehicle traversing the terrain.
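The class-and-slope-to-hazard pipeline can be sketched as follows. This is a minimal illustration, not the patented method: the slip model (linear in the tangent of the slope), the per-class coefficients, and the hazard thresholds are all assumed values chosen for the example.

```python
import math

# Assumed per-class slip coefficients: slip = coeff * tan(slope).
# These numbers are illustrative, not from the patent.
SLIP_COEFF = {"sand": 1.8, "gravel": 0.9, "bedrock": 0.3}

# Assumed per-class heuristic mapping from predicted slip to hazard level.
HAZARD_THRESHOLDS = {"sand": (0.2, 0.5), "gravel": (0.3, 0.6), "bedrock": (0.4, 0.8)}

def predict_slip(terrain_class: str, slope_deg: float) -> float:
    """Predicted slip ratio for one terrain region, clamped to [0, 1]."""
    slip = SLIP_COEFF[terrain_class] * math.tan(math.radians(slope_deg))
    return max(0.0, min(1.0, slip))

def hazard_level(terrain_class: str, slip: float) -> str:
    """Map a predicted slip amount to a discrete hazard level."""
    low, high = HAZARD_THRESHOLDS[terrain_class]
    if slip < low:
        return "safe"
    if slip < high:
        return "caution"
    return "hazard"

slip = predict_slip("sand", 15.0)   # sandy region on a 15-degree slope
level = hazard_level("sand", slip)
```

In practice each image region would yield its own class and slope estimate, and the resulting hazard levels would feed the rover's path planner.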

System and method for navigating a sensor-equipped mobile platform through an environment to a destination

A method for navigating a sensor-equipped mobile platform through an environment to a destination, the method including: capturing a first image in a first state of illumination; capturing a second image in a second state of illumination; generating a difference image from said first image and said second image; locating an imaging target based on said difference image, said imaging target including a machine-readable code embedded therein, said machine-readable code including navigation vector data; extracting said navigation vector data from said machine-readable code; and using said extracted navigation vector data to direct the navigation of the mobile platform through the environment to a destination.
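The difference-image step can be sketched as below. This is a minimal illustration under assumed conditions: grayscale frames are nested lists of pixel values, the imaging target is the region that changes between the two illumination states, and the change threshold is an arbitrary example value. Decoding the embedded machine-readable code is out of scope here.

```python
def difference_image(lit, unlit):
    """Per-pixel absolute difference of two equal-size grayscale frames."""
    return [[abs(a - b) for a, b in zip(row1, row2)]
            for row1, row2 in zip(lit, unlit)]

def locate_target(diff, threshold=50):
    """Centroid (x, y) of pixels that changed between illumination states."""
    pts = [(x, y) for y, row in enumerate(diff)
           for x, v in enumerate(row) if v >= threshold]
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

# Toy 3x3 frames: one pixel brightens under the second illumination state.
unlit = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
lit   = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
target = locate_target(difference_image(lit, unlit))
```

The located target region would then be cropped and passed to a code reader to extract the navigation vector data.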

THREE-LAYER INTELLIGENCE SYSTEM ARCHITECTURE AND AN EXPLORATION ROBOT

A three-layer intelligence system architecture and an exploration robot are provided. The three-layer intelligence system architecture includes: a digital twin module for creating a virtual exploration environment and a virtual robot according to explored environment data acquired in real time by the exploration robot and robot data of the exploration robot; a virtual reality module for generating a process and a result of the virtual robot executing control commands in the virtual exploration environment according to the virtual exploration environment, the virtual robot, and control commands issued by control personnel for the exploration robot; and a man-machine fusion module for transmitting the control commands, showing the control personnel the process and the result of the virtual robot executing the control commands in the virtual exploration environment, and causing the exploration robot to execute the control commands after acquiring feedback indicating that the control personnel confirm the control commands.
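The confirm-then-execute workflow of the man-machine fusion module can be sketched as a simple state machine. All class and method names here are hypothetical stand-ins; the point is only the ordering: simulate in the virtual environment first, dispatch to the real robot only after operator confirmation.

```python
class ManMachineFusion:
    """Hypothetical sketch: preview a command virtually, execute it for real
    only after the control personnel confirm."""

    def __init__(self, virtual_robot, real_robot):
        self.virtual_robot = virtual_robot
        self.real_robot = real_robot
        self.pending = None

    def preview(self, command):
        """Run the command in the virtual environment and hold it pending."""
        result = self.virtual_robot.execute(command)
        self.pending = command
        return result  # shown to the control personnel

    def confirm(self):
        """Dispatch the pending command to the real robot after approval."""
        if self.pending is None:
            raise RuntimeError("no pending command to confirm")
        result = self.real_robot.execute(self.pending)
        self.pending = None
        return result

class EchoRobot:
    """Stand-in for both the virtual robot and the exploration robot."""
    def __init__(self, name):
        self.name = name
        self.log = []
    def execute(self, command):
        self.log.append(command)
        return f"{self.name} executed {command}"

fusion = ManMachineFusion(EchoRobot("virtual"), EchoRobot("real"))
fusion.preview("drive 2m")          # simulated only; real robot untouched
fusion.confirm()                    # operator approved; real robot acts
```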

Traverse and trajectory optimization and multi-purpose tracking

Various examples are provided for object identification and tracking, traverse optimization, and/or trajectory optimization. In one example, a method includes determining a terrain map including at least one associated terrain type; and determining a recommended traverse along the terrain map based upon at least one defined constraint associated with the at least one associated terrain type. In another example, a method includes determining a transformation operator corresponding to a reference frame based upon at least one fiducial marker in a captured image comprising a tracked object; converting the captured image to a standardized image based upon the transformation operator, the standardized image corresponding to the reference frame; and determining a current position of the tracked object from the standardized image.
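The first example, a constraint-aware recommended traverse over a terrain map, can be sketched with Dijkstra's algorithm over a grid. The terrain types, per-type traversal costs, and the "untraversable" constraint are assumed for illustration, not taken from the patent.

```python
import heapq

# Assumed costs per terrain type; None marks a type excluded by constraint.
COST = {"rock": 1.0, "sand": 3.0, "crater": None}

def recommended_traverse(grid, start, goal):
    """Cheapest path over the terrain grid, skipping excluded terrain."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                cost = COST[grid[nr][nc]]
                if cost is None:
                    continue  # constraint: this terrain type is untraversable
                nd = d + cost
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

grid = [["rock", "sand", "rock"],
        ["rock", "crater", "rock"],
        ["rock", "rock", "rock"]]
path = recommended_traverse(grid, (0, 0), (0, 2))
```

Here the planner crosses the costly sand cell rather than taking the longer detour, and it never enters the excluded crater cell.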

SYSTEMS FOR MULTI-VEHICLE COLLABORATION AND METHODS THEREOF

Some embodiments of the disclosure are directed to multi-vehicle collaboration. In some embodiments, a first vehicle is in communication with a second vehicle. In some embodiments, one or more processors of the first vehicle are configured to communicate with one or more processors of the second vehicle to perform a joint action. In some embodiments, the one or more processors of the first vehicle are configured to selectively control components of a suspension system, a motor assembly, and/or a latch system of the first vehicle to perform a first part of the joint action. In some embodiments, the one or more processors of the second vehicle are configured to selectively control components of the suspension system, the motor assembly, and/or the latch system of the second vehicle to perform a second part of the joint action.
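The split of a joint action into per-vehicle parts can be sketched as below. The subsystem names follow the abstract; the command payloads and class names are illustrative assumptions only.

```python
# Assumed joint action: each vehicle gets its own part, naming only the
# subsystems it must selectively control.
JOINT_ACTION = {
    "vehicle_1": {"suspension": "lower", "latch": "engage"},
    "vehicle_2": {"suspension": "lower", "motor": "hold_torque"},
}

class Vehicle:
    """Hypothetical stand-in for a vehicle's onboard processors."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def perform_part(self, part):
        # Selectively control only the subsystems named in this part.
        for subsystem, command in part.items():
            self.state[subsystem] = command
        return self.state

v1, v2 = Vehicle("vehicle_1"), Vehicle("vehicle_2")
v1.perform_part(JOINT_ACTION["vehicle_1"])
v2.perform_part(JOINT_ACTION["vehicle_2"])
```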

VIRTUAL PRESENCE FOR TELEROBOTICS IN A DYNAMIC SCENE
20210347053 · 2021-11-11

Described herein are methods and systems for providing virtual presence for telerobotics in a dynamic scene. A sensor captures frames of a scene comprising one or more objects. A computing device generates a set of feature points corresponding to objects in the scene and matches the set of feature points to 3D points in a map of the scene. The computing device generates a dense mesh of the scene and the objects using the matched feature points and transmits the dense mesh and the frame to a remote viewing device. The remote viewing device generates a 3D representation of the scene and the objects for display to a user and receives commands from the user corresponding to interaction with the 3D representation of the scene. The remote viewing device transmits the commands to a robot device that executes the commands to perform operations on the objects in the scene.
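The feature-matching step can be sketched as a nearest-descriptor search. This is a minimal illustration under assumptions: descriptors are short numeric tuples, each map entry pairs a descriptor with a 3D point, and the distance threshold is an arbitrary example value (real systems typically use ORB/SIFT descriptors with ratio tests).

```python
def match_features(frame_descs, map_points, max_dist=0.5):
    """Return (frame_idx, map_idx) pairs for nearest-descriptor matches."""
    matches = []
    for i, fd in enumerate(frame_descs):
        best_j, best_d = None, float("inf")
        for j, (md, _xyz) in enumerate(map_points):
            # Euclidean distance between descriptors.
            d = sum((a - b) ** 2 for a, b in zip(fd, md)) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None and best_d <= max_dist:
            matches.append((i, best_j))
    return matches

# Toy map: (descriptor, 3D point) pairs, plus two frame descriptors.
map_points = [((0.0, 0.0), (1.0, 2.0, 3.0)),
              ((1.0, 1.0), (4.0, 5.0, 6.0))]
frame_descs = [(0.1, 0.0), (0.9, 1.1)]
matches = match_features(frame_descs, map_points)
```

The matched 3D points would then anchor the dense mesh that is streamed to the remote viewing device.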