G05D101/15

Brain-like memory-based environment perception and decision-making method and system for unmanned surface vehicle
12422859 · 2025-09-23

The present disclosure relates to the technical field of decision-making of unmanned surface vehicles, and provides a brain-like memory-based environment perception and decision-making method and system for an unmanned surface vehicle. The method includes: obtaining an image of an environment in front of an unmanned surface vehicle; and inputting the image of the environment into an environment perception and decision-making model of the unmanned surface vehicle, and outputting an action instruction, where the environment perception and decision-making model of the unmanned surface vehicle includes an image feature extractor, a Bidirectional Encoder Representations from Transformers (BERT) model, a fully connected layer, a short-term scene memory module, and a long-term memory module that are connected in sequence; the BERT model extracts, from an image feature, an image feature representation containing a text feature. The present disclosure improves the accuracy of action decision-making.
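The staged pipeline described above can be sketched in miniature. Everything below is a hypothetical illustration: the feature extractor, the BERT fusion step, and both memory modules are replaced by trivial stand-ins, and the decision rule is invented for the example, not taken from the disclosure.

```python
from collections import deque

# Sketch of the pipeline: image features -> (BERT-like fusion, stubbed)
# -> short-term scene memory -> long-term memory -> action instruction.

ACTIONS = ["forward", "port", "starboard", "stop"]

def extract_features(image):
    # Stub feature extractor: mean pixel intensity per row.
    return [sum(row) / len(row) for row in image]

def fuse_with_text(features):
    # Stand-in for the BERT fusion step: tag each feature with a label.
    return [("obstacle" if f > 0.5 else "clear", f) for f in features]

class MemoryDecider:
    def __init__(self, short_len=5):
        self.short_term = deque(maxlen=short_len)  # recent scenes
        self.long_term = {}                        # label -> running count

    def decide(self, fused):
        self.short_term.append(fused)
        for label, _ in fused:
            self.long_term[label] = self.long_term.get(label, 0) + 1
        # Trivial decision rule: stop if any recent scene saw an obstacle.
        recent_obstacle = any(
            label == "obstacle" for scene in self.short_term for label, _ in scene
        )
        return "stop" if recent_obstacle else "forward"

decider = MemoryDecider()
clear_image = [[0.1, 0.2], [0.3, 0.1]]
action = decider.decide(fuse_with_text(extract_features(clear_image)))
```

The bounded `deque` plays the role of the short-term scene memory, while the running counts stand in for long-term memory accumulation across scenes.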

Multi-Agent Navigation
20250355448 · 2025-11-20

Described herein is a method of performing autonomous navigation by deploying one or more nodes over a predetermined space such that the one or more nodes are trained based on a predetermined set of traffic rules; deploying one or more agents in the predetermined space; determining a destination for each of the one or more agents; determining a path to the destination; querying at least one of the nodes whose corresponding regions encompass a current position of the corresponding one or more agents; determining, by at least one of the nodes, a preferred direction of travel; sending the preferred direction of travel to the corresponding one or more agents; enabling the corresponding one or more agents to travel in the preferred direction; and determining whether the current position of the corresponding one or more agents equals the assigned destination.
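The query loop in this abstract can be illustrated with a toy grid: a node serving the agent's current region suggests a preferred direction, the agent moves, and the loop ends at the destination. The greedy move-toward-goal rule below is an invented stand-in for the trained traffic-rule nodes.

```python
# Hypothetical node/agent loop: nodes cover grid regions and, when
# queried, return a preferred direction of travel for the agent.

def node_direction(pos, dest):
    # The node serving pos suggests one axis-aligned step toward dest.
    x, y = pos
    dx, dy = dest[0] - x, dest[1] - y
    if abs(dx) >= abs(dy) and dx != 0:
        return (1 if dx > 0 else -1, 0)
    if dy != 0:
        return (0, 1 if dy > 0 else -1)
    return (0, 0)  # already at destination

def navigate(start, dest, max_steps=100):
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == dest:  # arrival check from the abstract's final step
            break
        step = node_direction(pos, dest)  # query the node for this region
        pos = (pos[0] + step[0], pos[1] + step[1])
        path.append(pos)
    return path

path = navigate((0, 0), (2, 1))
```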

Machine learning based unmanned aerial anti-tampering trigger system design apparatus and method

An anti-tampering trigger system design apparatus includes: an equipment simulation device that generates driving simulation data by simulating data output by equipment mounted on an unmanned aerial vehicle; a virtual unmanned aerial vehicle simulation device that generates a function result value by simulating a function of the unmanned aerial vehicle based on the driving simulation data, and generates a mission result value by simulating mission performance of the unmanned aerial vehicle using the function of the simulated unmanned aerial vehicle; and a machine learning device that performs machine learning of an anti-tampering trigger using the driving simulation data, the function result value, and the mission result value.
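The three-stage pipeline can be sketched end to end: (1) equipment simulation emits driving data, (2) a virtual UAV turns it into function and mission result values, (3) a learner fits a tamper trigger from the combined records. The simulators and the threshold "learner" below are invented toy stand-ins, not the patented machine-learning design.

```python
import random

random.seed(0)

def simulate_equipment(tampered):
    # Driving data: tampered equipment drifts away from nominal output (1.0).
    return 1.0 + (random.uniform(0.5, 1.0) if tampered else random.uniform(-0.1, 0.1))

def simulate_uav(driving):
    function_ok = abs(driving - 1.0) < 0.3       # function result value
    mission_score = 1.0 if function_ok else 0.2  # mission result value
    return function_ok, mission_score

def learn_trigger(records):
    # Pick the driving-data threshold that separates tampered runs.
    tampered_vals = [d for d, _, _, t in records if t]
    clean_vals = [d for d, _, _, t in records if not t]
    return (max(clean_vals) + min(tampered_vals)) / 2

records = []
for tampered in [False] * 20 + [True] * 20:
    d = simulate_equipment(tampered)
    ok, score = simulate_uav(d)
    records.append((d, ok, score, tampered))

threshold = learn_trigger(records)
trigger = lambda driving: driving > threshold  # anti-tampering trigger
```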

Using simulated environments to improve autonomous robot operation in real environments
12560946 · 2026-02-24

Disclosed are apparatuses, systems, and techniques that use simulated environments to improve the operation of autonomous robots in real environments. A method can include generating, for a real environment including a real robot having one or more real sensors, a simulated environment modeling the real environment, the simulated environment including a simulated robot corresponding to the real robot, the simulated robot including one or more simulated sensors corresponding to the one or more real sensors, obtaining simulated data based at least on simulated sensor data collected using the one or more simulated sensors, and using the simulated data to control operation of the real robot within the real environment.
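A minimal sim-to-real sketch: a simulated robot with a simulated range sensor is used to derive a stopping distance, which would then be applied to control the real robot. The robot and sensor models here are illustrative stand-ins for the disclosure's simulated environment, not its actual implementation.

```python
# Hypothetical simulated robot with a simulated range sensor.
class SimulatedRobot:
    def __init__(self, obstacle_at):
        self.position, self.obstacle_at = 0.0, obstacle_at

    def read_sensor(self):
        return self.obstacle_at - self.position  # simulated range reading

    def step(self):
        self.position += 0.1  # advance 0.1 m per control tick

def find_stop_distance(sim, margin=0.5):
    # Collect simulated sensor data and derive a control rule:
    # drive until the range reading falls below the chosen margin.
    while sim.read_sensor() > margin:
        sim.step()
    return sim.position

# The distance derived in simulation would control the real robot.
stop_at = find_stop_distance(SimulatedRobot(obstacle_at=3.0))
```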

Escalating hazard-response of dynamically stable mobile robot in a collaborative environment and related technology

A method in accordance with at least some embodiments of the present technology includes determining first hazard information about a human in an environment at a first time. The method further includes decelerating a mobile robot in the environment based at least partially on the first hazard information. The method further includes determining second hazard information about the human at a second time after the first time. The method further includes reconfiguring the mobile robot based at least partially on the second hazard information. Reconfiguring the mobile robot includes moving the mobile robot from a standing configuration to a non-standing configuration. The method further includes determining third hazard information about the human at a third time after the second time. Finally, the method includes causing a safe operating stop of the mobile robot based at least partially on the third hazard information.
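The escalating response sequence reads naturally as a small state machine: hazard information at successive times drives the robot from decelerating, to a non-standing configuration, to a safe operating stop. The numeric hazard levels and the halving of speed below are invented for illustration; the abstract does not specify how hazard information is quantified.

```python
def respond(state, hazard_level):
    # Each call corresponds to new hazard information about the human.
    if hazard_level >= 3:
        return {**state, "stopped": True}                  # third: safe stop
    if hazard_level == 2:
        return {**state, "configuration": "non-standing"}  # second: reconfigure
    if hazard_level == 1:
        return {**state, "speed": state["speed"] * 0.5}    # first: decelerate
    return state

state = {"speed": 1.0, "configuration": "standing", "stopped": False}
for level in (1, 2, 3):  # hazard info at times t1 < t2 < t3
    state = respond(state, level)
```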

Method for training migration scene-based trajectory prediction model and unmanned driving device

A method for training a migration scene-based trajectory prediction model is provided. A first trajectory prediction model and a plurality of candidate training samples are obtained; for any candidate training sample, a reference value corresponding to the candidate training sample is determined according to at least one of a trajectory feature corresponding to the candidate training sample or a prediction result of the first trajectory prediction model for the candidate training sample; target training samples are selected from the plurality of candidate training samples according to the reference values corresponding to the plurality of candidate training samples; and the first trajectory prediction model is trained according to the target training samples, to obtain a second trajectory prediction model, where the second trajectory prediction model is configured to predict traveling trajectories of obstacles in a migration scene.
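The sample-selection step can be sketched concretely: each candidate gets a reference value derived from the first model's prediction, and the highest-value candidates are kept for training the second model. The stub predictor and error-based reference value below are assumptions for the sake of the example; the abstract allows the reference value to come from trajectory features as well.

```python
def first_model(features):
    return sum(features)  # stand-in for the first trajectory prediction model

def reference_value(sample):
    features, target = sample
    # Assumed rule: larger prediction error -> more informative sample.
    return abs(first_model(features) - target)

def select_targets(candidates, keep=2):
    # Keep the candidates with the highest reference values.
    ranked = sorted(candidates, key=reference_value, reverse=True)
    return ranked[:keep]

candidates = [
    ([1.0, 2.0], 3.0),   # error 0.0
    ([1.0, 2.0], 5.0),   # error 2.0
    ([0.5, 0.5], 4.0),   # error 3.0
]
targets = select_targets(candidates)
```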

Predictive modeling of aircraft dynamics

A computer-implemented method for predicting behavior of aircraft is provided. The method comprises inputting a current state of a number of aircraft into a number of hidden layers of a neural network, wherein the neural network is fully connected. An action applied to the aircraft is input into the hidden layers concurrently with the current state. The hidden layers, according to the current state and current action, determine a residual output that comprises an incremental difference in the state of the aircraft resulting from the current action. A skip connection feeds forward the current state of the aircraft, and the residual output is added to the current state to determine a next state of the aircraft.
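The residual formulation is compact enough to show directly: the network predicts an incremental state change from the current state and action, and the skip connection adds it back to the current state. The fixed linear map below replaces the fully connected hidden layers purely for illustration.

```python
def residual_net(state, action):
    # Stand-in for the fully connected hidden layers: a linear increment.
    return [0.1 * s + 0.5 * a for s, a in zip(state, action)]

def next_state(state, action):
    delta = residual_net(state, action)           # residual output
    return [s + d for s, d in zip(state, delta)]  # skip connection adds it

# One prediction step: state fed forward, residual added on top.
s1 = next_state([1.0, 0.0], [0.0, 1.0])
```

Predicting the increment rather than the full next state is what makes this a residual model: when the action has little effect, the network only has to output values near zero.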

Robot fleet management for value chain networks

A robot fleet management platform includes datastores configured to store a governance library defining governance standards. Processors execute computer-readable instructions to implement a governance-enabling intelligence layer that receives and responds to intelligence requests received from intelligence service clients. The intelligence layer includes artificial intelligence services including machine learning, rules-based intelligence, digital twin, robot process automation, and machine vision. The set of governance standards is applied to decisions made by one or more of the set of artificial intelligence services. An intelligence layer controller coordinates performance of the artificial intelligence services on behalf of the intelligence service clients and performance of analyses corresponding to the artificial intelligence services based on the set of governance standards. The intelligence layer returns decisions determined by the artificial intelligence services in response to the intelligence requests. The decisions are determined based on intelligence service data sources and the set of analyses.
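The controller-plus-governance pattern can be sketched as a dispatch layer: requests are routed to registered intelligence services, and each decision is checked against the governance standards before being returned. The service, the standard, and the escalation fallback below are all invented placeholders.

```python
class IntelligenceLayer:
    def __init__(self, governance_standards):
        self.services = {}
        self.standards = governance_standards  # callables: decision -> bool

    def register(self, name, service):
        self.services[name] = service

    def handle(self, request):
        # Controller coordinates the service call on behalf of the client.
        service = self.services[request["service"]]
        decision = service(request["data"])
        # Apply every governance standard to the decision before returning it.
        if all(standard(decision) for standard in self.standards):
            return decision
        return {"action": "escalate", "reason": "governance violation"}

# Hypothetical standard: no commanded speed above 2.0 m/s.
layer = IntelligenceLayer([lambda d: d.get("speed", 0) <= 2.0])
layer.register("ml", lambda data: {"action": "move", "speed": data["speed"]})
ok = layer.handle({"service": "ml", "data": {"speed": 1.5}})
blocked = layer.handle({"service": "ml", "data": {"speed": 3.0}})
```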

Robotic navigation with simultaneous local path planning and learning

In conventional robot navigation techniques, learning and planning algorithms act independently without guiding each other. A method and system for robotic navigation with simultaneous local path planning and learning is disclosed. The method provides an approach in which learning and planning proceed simultaneously, each assisting the other to improve overall system performance. The planner acts as an actuator and helps balance exploration and exploitation in the learning algorithm. The synergy between the dynamic window approach (DWA) as the planning algorithm and the disclosed Next Best Q-learning (NBQ) as the learning algorithm offers an efficient local planning algorithm. Unlike traditional Q-learning, the dimension of the Q-tree in NBQ is dynamic and does not need to be defined a priori.
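The planner/learner synergy can be sketched with a toy one-dimensional task: a DWA-style planner proposes candidate motions, and a Q-table stored as a dict grows lazily as states are visited, echoing the idea that the Q-structure need not be dimensioned a priori. The dynamics, rewards, and candidate set are toy stand-ins, and the dict is only an analogy for the disclosed Q-tree.

```python
import random

random.seed(1)
q = {}  # (state, action) -> value, created lazily as states are visited

def dwa_candidates(state):
    # Stand-in planner: feasible velocity adjustments from this state.
    return [-1, 0, 1]

def choose(state, epsilon=0.2):
    actions = dwa_candidates(state)
    if random.random() < epsilon:  # planner-guided exploration
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_s, alpha=0.5, gamma=0.9):
    best_next = max(q.get((next_s, a), 0.0) for a in dwa_candidates(next_s))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

goal = 3
for _ in range(200):          # episodes
    s = 0
    for _ in range(10):       # steps per episode
        a = choose(s)
        ns = max(0, min(goal, s + a))
        update(s, a, 1.0 if ns == goal else -0.1, ns)
        s = ns
```

Because `q` starts empty and entries appear only when (state, action) pairs are encountered, its size adapts to the explored state space rather than being fixed in advance.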

Method and apparatus for anomaly detection for individual vehicles in swarm system
12608026 · 2026-04-21

A method for detecting anomalies in a swarm system comprises: collecting first movement data from multiple vehicles moving as a swarm in a first scenario; generating first training data based on positioning data and second training data based on multi-channel inertial sensor data from the first movement data; training a first learning model using the first training data and multiple second learning models using the second training data for each vehicle; receiving real-time second movement data from vehicles moving as a swarm in a second scenario; generating first input data based on positioning data from the second movement data; inputting the first input data into the first learning model to detect abnormal vehicles in real-time; generating second input data for abnormal vehicles based on inertial sensor data from the second movement data; and inputting the second input data into the corresponding second learning model to identify abnormal channels in the inertial measurement unit of abnormal vehicles.
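The two-stage structure can be illustrated in miniature: a swarm-level check on positioning data flags abnormal vehicles, then a per-vehicle check on multi-channel inertial data points at the faulty channel. Both "models" below are simple statistical stand-ins for the trained first and second learning models, with thresholds invented for the example.

```python
def detect_abnormal_vehicles(positions):
    # Stage 1 stand-in: flag vehicles far from the swarm centroid.
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return [
        i for i, (x, y) in enumerate(positions)
        if (x - cx) ** 2 + (y - cy) ** 2 > 20.0
    ]

def identify_abnormal_channels(imu_channels, limit=2.0):
    # Stage 2 stand-in: per-vehicle check of each inertial channel.
    return [
        name for name, values in imu_channels.items()
        if max(abs(v) for v in values) > limit
    ]

positions = [(0, 0), (1, 0), (0, 1), (9, 9)]  # vehicle 3 strays from the swarm
abnormal = detect_abnormal_vehicles(positions)
channels = identify_abnormal_channels(
    {"gyro_x": [0.1, 0.2], "accel_z": [5.0, 4.8]}  # accel_z reads out of range
)
```

Running the per-channel check only on vehicles flagged in stage 1 mirrors the abstract's flow: positioning data screens the swarm cheaply, and the inertial-channel models localize the fault within the flagged vehicle's IMU.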