G05D2101/15

FAILURE PREDICTION AND RISK MITIGATION IN SMALL UNCREWED AERIAL SYSTEMS
20240210963 · 2024-06-27 ·

A computer-implemented system and associated method of operating a Small Uncrewed Aircraft System (SUAS) including at least one Small Uncrewed Aircraft or drone. The method comprises capturing data during operation of the SUAS from a number of sensors of different types, performing analysis on the captured data using one or more Artificial Intelligence/Machine Learning (AI/ML) models that have been trained on data sets including historical SUAS data and SUAS system fault data, to predict or identify a potential SUAS failure mode, and when a potential failure mode is predicted or identified, providing a course of action for further operation of the SUAS based on a severity and predicted timing of the SUAS failure mode.
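The decision flow this abstract describes (predict a potential failure mode, then choose a course of action from its severity and predicted timing) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the AI/ML model is stubbed with simple rules, and all failure modes, thresholds, and action names are assumed for the example.

```python
# Sketch of the abstract's flow: a predictor (here a rule stub standing in for
# an AI/ML model trained on historical SUAS and fault data) flags a potential
# failure mode, and a course of action is chosen from its severity and
# predicted timing. Thresholds and action names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PredictedFailure:
    mode: str
    severity: str         # "low" | "medium" | "high"
    seconds_until: float  # predicted time until the failure manifests

def predict_failure(sensor_data: dict) -> Optional[PredictedFailure]:
    """Stand-in for the trained AI/ML model analysing multi-sensor data."""
    if sensor_data.get("motor_vibration_g", 0.0) > 2.5:
        return PredictedFailure("motor_bearing_wear", "high", 45.0)
    if sensor_data.get("battery_temp_c", 0.0) > 60.0:
        return PredictedFailure("battery_overheat", "medium", 300.0)
    return None

def course_of_action(failure: Optional[PredictedFailure]) -> str:
    """Map severity and predicted timing to a course of further operation."""
    if failure is None:
        return "continue_mission"
    if failure.severity == "high" and failure.seconds_until < 60.0:
        return "land_immediately"
    if failure.severity in ("high", "medium"):
        return "return_to_base"
    return "continue_with_monitoring"

print(course_of_action(predict_failure({"motor_vibration_g": 3.1})))
# prints "land_immediately": high severity, under a minute until predicted failure
```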

METHODS AND INTERNET OF THINGS (IOT) SYSTEMS FOR SMART GAS PIPELINE MAINTENANCE BASED ON HUMAN-MACHINE LINKAGE

Methods and Internet of Things (IOT) systems for smart gas pipeline maintenance based on human-machine linkage are provided. The IoT system includes a smart gas user platform, a smart gas service platform, a smart gas pipeline network safety management platform, a smart gas pipeline network sensor network platform, and a smart gas pipeline network object platform. The method includes determining a first cycle based on data of a pipeline to be maintained, a feature of a maintainer, and/or a feature of a maintenance robot, obtaining, through a maintainer terminal and/or the maintenance robot, first feedback data based on the first cycle, determining, based on the first feedback data and the data of the pipeline to be maintained, a maintenance parameter and sending the maintenance parameter to the maintainer terminal, and generating, based on the maintenance parameter, a control instruction and sending the control instruction to the maintenance robot.

Method and System for Robot Navigation in Unknown Environments
20240192701 · 2024-06-13 ·

Broadly speaking, embodiments of the present techniques provide methods and systems for robot navigation in an unknown environment. In particular, the present techniques provide a navigation system comprising a navigating device and a sensor network comprising a plurality of static sensors. The sensor network is trained to predict a direction to a target object, and the navigating device is trained to reach the target object as efficiently as possible using information obtained from the sensor network.
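The described setup (static sensors predict a direction to the target; the navigating device uses that information to reach it efficiently) can be illustrated with a toy greedy walk. This is a sketch under assumptions: the trained direction predictors are replaced by exact geometry, and the step size, grid layout, and nearest-3 aggregation are invented for the example.

```python
# Illustrative sketch of the setup described above: each static sensor reports
# a bearing toward the target, and the navigating device averages the bearings
# from its few nearest sensors and steps greedily along the result. The learned
# predictors are replaced by exact geometry; step size, sensor layout, and the
# nearest-3 aggregation are assumptions for this toy example.
import math

def sensor_bearing(sensor_xy, target_xy):
    """In the described system this direction is *predicted*; here we compute it."""
    return math.atan2(target_xy[1] - sensor_xy[1], target_xy[0] - sensor_xy[0])

def navigate(start, target, sensors, step=0.5, max_steps=200):
    """Greedy walk: move along the mean bearing of the 3 nearest sensors."""
    pos = list(start)
    for _ in range(max_steps):
        if math.dist(pos, target) < step:
            return tuple(pos), True
        nearest = sorted(sensors, key=lambda s: math.dist(s, pos))[:3]
        vx = sum(math.cos(sensor_bearing(s, target)) for s in nearest)
        vy = sum(math.sin(sensor_bearing(s, target)) for s in nearest)
        norm = math.hypot(vx, vy)
        if norm == 0.0:          # degenerate case: bearings cancel out
            return tuple(pos), False
        pos[0] += step * vx / norm
        pos[1] += step * vy / norm
    return tuple(pos), False

grid = [(x, y) for x in range(0, 11, 2) for y in range(0, 11, 2)]
end, reached = navigate((0.0, 0.0), (10.0, 10.0), grid)
print(reached)  # True: the device closes in on the target
```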

LIFELONG ROBOT LEARNING FOR MOBILE ROBOTS

A method is disclosed for improving a mobile robot that is configured to perform a task in an environment using an operating procedure. Data is received that was recorded by the mobile robot using one or more sensors as the mobile robot navigates the environment to perform the task. A database and/or a model associated with the environment is updated to incorporate the recorded data. The operating procedure of the mobile robot can be modified, based on the database and/or the model, to generate a modified operating procedure for performing the task in the environment that improves a performance of the mobile robot. Additionally, a recommendation for improving the performance of the mobile robot when performing the task in the environment can be determined, based on the database and/or the model, and displayed to a user for consideration.

MOBILE ROBOT FOR DETERMINING WHETHER TO BOARD ELEVATOR, AND OPERATING METHOD THEREFOR
20240184305 · 2024-06-06 ·

A mobile robot for determining whether to board an elevator may include a camera configured for capturing the inside of the elevator, an object recognition unit configured for recognizing the area of the elevator and the number of passengers from an image captured by the camera, and a control unit configured for calculating a density of the elevator based on the area and the number of passengers. The control unit may perform a determination of whether to board the elevator based on the density, and control a driving wheel motor based on the determination.
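The core decision above reduces to a density check: passengers divided by recognized floor area, compared against a threshold. A minimal sketch, assuming a threshold value that the abstract does not specify:

```python
# A minimal sketch of the boarding decision above: density is the recognized
# passenger count over the recognized floor area, and the robot boards only
# when that density is below a threshold. The threshold is an assumed
# parameter, not one given in the abstract.
def should_board(area_m2: float, passengers: int,
                 max_density: float = 0.35) -> bool:
    """Decide whether the elevator is sparse enough for the robot to board."""
    if area_m2 <= 0:
        return False                  # nothing recognized: stay off the elevator
    density = passengers / area_m2    # passengers per square metre
    return density < max_density

print(should_board(area_m2=4.0, passengers=1))  # prints True  (0.25 < 0.35)
print(should_board(area_m2=4.0, passengers=2))  # prints False (0.50 >= 0.35)
```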

UNMANNED PLATFORM WITH BIONIC VISUAL MULTI-SOURCE INFORMATION AND INTELLIGENT PERCEPTION
20240184309 · 2024-06-06 ·

Disclosed is an unmanned platform with bionic visual multi-source information and intelligent perception. The unmanned platform is equipped with a bionic polarization vision/inertia/laser radar combined navigation module, a deep learning object detection module and an autonomous obstacle avoidance module; the bionic polarization vision/inertia/laser radar combined navigation module is configured to position and orient the unmanned platform in real time; the deep learning object detection module is configured to sense the environment around the unmanned platform according to RGB images of the surrounding environment collected by the bionic polarization vision/inertia/laser radar combined navigation module; and the autonomous obstacle avoidance module determines whether there are any obstacles around the unmanned platform during running according to the objects identified by the deep learning object detection module, and performs autonomous obstacle avoidance in combination with the carrier navigation and positioning information. Concealment, autonomous navigation, object detection and autonomous obstacle avoidance capabilities of the unmanned platform are thus improved.

AUTONOMOUS DRIVER SYSTEM FOR AGRICULTURAL VEHICLE ASSEMBLIES AND METHODS FOR SAME

An autonomous driver system for an agricultural vehicle assembly includes a sensor interface configured for coupling with one or more of vehicle sensors or implement sensors and a function interface configured for coupling with one or more of vehicle actuators or implement actuators. An autonomous driving controller is in communication with the sensor and function interfaces. The autonomous driving controller is configured to autonomously implement a planned agricultural operation with the agricultural vehicle and the agricultural implement. The controller is configured to identify and remedy one or more operation disturbances outside of the planned agricultural operation including identifying the one or more operation disturbances with one or more of the vehicle or implement sensors and selecting one or more remedial actions for the one or more operation disturbances. The controller is configured to implement the one or more remedial actions with one or more of the vehicle or implement actuators.
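The identify-then-remedy loop described above can be sketched as a small dispatch: read sensors, name any disturbance outside the planned operation, look up a remedial action, and issue it to an actuator. The disturbance names, thresholds, and actuator commands below are all illustrative assumptions, not the patent's actual catalogue.

```python
# Schematic sketch of the disturbance-handling loop: sensor readings are
# mapped to a named disturbance, which selects a remedial action to issue to
# a vehicle or implement actuator. Names and thresholds are assumptions.
REMEDIES = {                       # disturbance -> (actuator, command)
    "implement_blockage": ("implement_lift", "raise_and_reverse"),
    "wheel_slip": ("throttle", "reduce_speed"),
    "row_deviation": ("steering", "correct_heading"),
}

def identify_disturbance(readings: dict):
    """Map raw sensor readings to a named disturbance (or None)."""
    if readings.get("implement_torque_nm", 0) > 400:
        return "implement_blockage"
    if readings.get("wheel_slip_ratio", 0) > 0.3:
        return "wheel_slip"
    if abs(readings.get("cross_track_error_m", 0)) > 0.5:
        return "row_deviation"
    return None

def remediate(readings: dict):
    """Select the remedial action for the identified disturbance, if any."""
    disturbance = identify_disturbance(readings)
    if disturbance is None:
        return ("none", "continue_plan")
    return REMEDIES[disturbance]

print(remediate({"wheel_slip_ratio": 0.4}))  # prints ('throttle', 'reduce_speed')
```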

ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING THEREOF

A method for controlling an electronic apparatus includes: identifying whether an emergency context has occurred by obtaining a sound near the electronic apparatus; based on identifying that the emergency context has occurred, obtaining information relating to a location where a sound relating to the emergency context is generated, based on information relating to the obtained sound; based on the information relating to the location, determining a first area corresponding to the location where the sound relating to the emergency context is generated; controlling a driver to move to the determined first area; obtaining information relating to the emergency context from the first area; based on context information and user history information, determining a second area corresponding to a location of the user; controlling the driver to move to the determined second area; and providing the information relating to the emergency context to the user in the second area.

AUTONOMOUS ROBOT AND ITS POSITION CORRECTION METHOD

An autonomous driving robot includes a driving unit that moves the autonomous robot; a camera; a traveling distance measurement sensor; and a control unit that estimates a location of the autonomous robot using a captured image and traveling distance information. An operation control program generates a robot viewpoint map based on the image captured by the camera, estimates a location of the autonomous robot based on the robot viewpoint map and the measured traveling distance information, and generates a global map based on the robot viewpoint map and position estimation information. The operation control program then inputs the generated robot viewpoint map and global map into a style-transfer model, and inputs the style-transferred robot viewpoint map and style-transferred global map output by the style-transfer model into an operation agent to correct the estimated position.

INTENTION-DRIVEN REINFORCEMENT LEARNING-BASED PATH PLANNING METHOD
20240219923 · 2024-07-04 ·

The present invention discloses an intention-driven reinforcement learning-based path planning method, including the following steps: 1: acquiring, by a data collector, a state of a monitoring network; 2: selecting a steering angle of the data collector according to positions of surrounding obstacles, sensor nodes, and the data collector; 3: selecting a speed of the data collector, a target node, and a next target node as an action of the data collector according to an ε-greedy policy; 4: determining, by the data collector, the next time slot according to the selected steering angle and speed; 5: obtaining rewards and penalties according to intentions of the data collector and the sensor nodes, and updating a Q value; 6: repeating step 1 to step 5 until a termination state or a convergence condition is satisfied; and 7: selecting, by the data collector, an action in each time slot having the maximum Q value as a planning result, and generating an optimal path. The method provided in the present invention can complete the data collection path planning with a higher probability of success and performance closer to the intention.
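Steps 1-7 above follow the standard tabular Q-learning loop, which can be compressed into a toy sketch: select actions with an ε-greedy policy, collect rewards, update Q values until convergence, then follow the maximum-Q action in each slot to produce the path. The grid world, rewards/penalties, and hyperparameters below are illustrative assumptions, not the patent's actual state or intention model.

```python
# Compressed sketch of steps 1-7: tabular Q-learning with an epsilon-greedy
# policy on a toy grid, standing in for the data collector choosing actions
# per time slot. Grid, rewards, and hyperparameters are assumptions.
import random

random.seed(0)
N = 4                                 # 4x4 grid; start (0, 0), goal (3, 3)
GOAL = (N - 1, N - 1)
ACTIONS = [(1, 0), (0, 1), (-1, 0), (0, -1)]
Q = {}                                # Q[(state, action)] -> value

def step(state, action):
    """Steps 2-4: apply the chosen action, clipped to the grid; step 5's reward."""
    nxt = (min(max(state[0] + action[0], 0), N - 1),
           min(max(state[1] + action[1], 0), N - 1))
    return nxt, (10.0 if nxt == GOAL else -1.0), nxt == GOAL

def choose(state, eps=0.2):
    """Step 3: epsilon-greedy -- explore with probability eps, else take max-Q."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda b: Q.get((state, b), 0.0))

alpha, gamma = 0.5, 0.9
for _ in range(500):                  # step 6: repeat until (near-)convergence
    s, done = (0, 0), False
    while not done:
        a = choose(s)
        nxt, r, done = step(s, a)
        best_next = max(Q.get((nxt, b), 0.0) for b in ACTIONS)
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
        s = nxt

path, s = [(0, 0)], (0, 0)            # step 7: follow the max-Q action per slot
while s != GOAL and len(path) < 20:
    a = max(ACTIONS, key=lambda b: Q.get((s, b), 0.0))
    s, _, _ = step(s, a)
    path.append(s)
print(path)  # greedy path from (0, 0) to (3, 3)
```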