G05D1/2249

Augmented reality for teleoperation

A method to manage a vehicle is disclosed. The method may include obtaining vehicle inputs that may include an image captured by a vehicle camera. The method may further include estimating a time delay in teleoperation communication with the vehicle, and generating an augmented reality image based on the time delay and the vehicle inputs. The method may further include rendering the augmented reality image on a user interface to manage the vehicle.
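The delay compensation described in this abstract can be illustrated with a minimal sketch: estimate the one-way delay, dead-reckon the vehicle's pose forward by that delay, and derive the offset at which the augmented overlay should be drawn. All names and the constant-speed motion model are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class VehicleInputs:
    # Illustrative vehicle inputs; field names are assumptions.
    x: float        # position along the road (m)
    speed: float    # forward speed (m/s)

def estimate_delay(send_ts: float, recv_ts: float) -> float:
    """Approximate the one-way teleoperation delay as half the
    round-trip time between timestamps (seconds)."""
    return max(0.0, (recv_ts - send_ts) / 2.0)

def predict_position(inputs: VehicleInputs, delay_s: float) -> float:
    """Dead-reckon where the vehicle will be once the operator's
    view catches up, assuming constant speed over the delay."""
    return inputs.x + inputs.speed * delay_s

def overlay_offset(inputs: VehicleInputs, delay_s: float) -> float:
    """Offset (m) between the camera frame's vehicle position and
    the delay-compensated position at which to render the overlay."""
    return predict_position(inputs, delay_s) - inputs.x
```

A real implementation would project this offset into image coordinates before rendering it on the operator's user interface.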

System and method for autonomous vehicular development and operation

A system and method for developing and using an autonomous vehicle driving model includes using a small-scale remote-controlled (RC) vehicle for training an autonomous vehicle driving model to operate a vehicle on the road. The RC vehicle may be equipped with a camera on a gimbal that can be turned based on the head movements of a driver operating the RC vehicle from a remote controlling station, and with a plurality of sensors that include force feedback sensors, an IMU, etc. Physiological data can be obtained from the driver while the driver remotely operates the RC vehicle, via one or more physiological sensors monitoring the driver's heartbeat rate, blood pressure, head and body movements, etc. The autonomous vehicle driving model can be developed by correlating the driver's physiological state with the driving of the RC vehicle.
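The correlation step in this abstract can be sketched with a plain Pearson correlation between an aligned physiological signal and a driving signal. The choice of signals (heart rate vs. steering magnitude) and the use of Pearson correlation are illustrative assumptions; the patent does not specify the correlation technique.

```python
def pearson(xs, ys):
    """Pearson correlation between two aligned sample sequences,
    e.g. driver heart rate vs. steering-input magnitude per frame."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A strong correlation between a stress signal and a particular maneuver could then weight how the driving model treats that maneuver during training.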

Remote operation control method, remote operation system, and moving body

A remote operation control method for controlling a remote operation of a moving body is provided. A video captured by a camera mounted on the moving body is transmitted to a remote operator terminal on a side of a remote operator remotely operating the moving body. The remote operation control method includes: setting an upper limit speed of the moving body during the remote operation to be lower as a quality of the video transmitted from the moving body to the remote operator terminal becomes lower or as an encoding and decoding time of the video becomes longer; and limiting a speed of the moving body during the remote operation to the upper limit speed or less regardless of an operation amount input by the remote operator.
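The speed-limiting rule in this abstract lends itself to a small sketch: compute an upper-limit speed that falls as video quality drops or codec latency grows, then clamp the operator's commanded speed to it. The functional form and coefficients below are illustrative assumptions; the patent only specifies the monotonic relationship.

```python
def upper_limit_speed(quality: float, codec_delay_s: float,
                      v_max: float = 15.0) -> float:
    """Speed cap (m/s) that decreases as video quality (in [0, 1])
    drops or as encode/decode time grows. Coefficients are illustrative."""
    quality = min(max(quality, 0.0), 1.0)
    latency_factor = 1.0 / (1.0 + 2.0 * codec_delay_s)
    return v_max * quality * latency_factor

def limit_speed(commanded: float, quality: float,
                codec_delay_s: float) -> float:
    """Clamp the operator's commanded speed to the cap, regardless
    of the operation amount the remote operator inputs."""
    return min(commanded, upper_limit_speed(quality, codec_delay_s))
```

With full quality and negligible codec delay the cap equals `v_max`; degrading either input lowers it, matching the method's stated behavior.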

Virtual presence for telerobotics in a dynamic scene

Described herein are methods and systems for providing virtual presence for telerobotics in a dynamic scene. A sensor captures frames of a scene comprising one or more objects. A computing device generates a set of feature points corresponding to objects in the scene and matches the set of feature points to 3D points in a map of the scene. The computing device generates a dense mesh of the scene and the objects using the matched feature points and transmits the dense mesh and the frame to a remote viewing device. The remote viewing device generates a 3D representation of the scene and the objects for display to a user and receives commands from the user corresponding to interaction with the 3D representation of the scene. The remote viewing device transmits the commands to a robot device that executes the commands to perform operations on the objects in the scene.
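The feature-matching step above can be sketched as greedy nearest-neighbour matching of frame feature descriptors against the descriptors attached to 3D map points. A production system would use an approximate index (e.g. FLANN) and a ratio test; the data layout and threshold here are illustrative assumptions.

```python
import math

def match_features(frame_feats, map_points, max_dist=0.5):
    """Match each frame feature to its nearest 3D map point by
    Euclidean descriptor distance, rejecting matches beyond max_dist.

    frame_feats: {feature_id: descriptor tuple}
    map_points:  {point_id: (xyz, descriptor tuple)}
    Returns a list of (feature_id, point_id) pairs.
    """
    matches = []
    for fid, fdesc in frame_feats.items():
        best, best_d = None, max_dist
        for pid, (_xyz, pdesc) in map_points.items():
            d = math.dist(fdesc, pdesc)
            if d < best_d:
                best, best_d = pid, d
        if best is not None:
            matches.append((fid, best))
    return matches
```

The matched pairs anchor the camera frame to the map, which is what makes the subsequent dense-mesh generation consistent across frames.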

COPILOT REPLACEMENT SYSTEM AND RELATED METHODS
20260003355 · 2026-01-01

This disclosure relates to systems and methods for providing a copilot replacement system (CPRS) that enables dual-pilot or multi-pilot aircraft to be operated by a single onboard pilot. Amongst other things, the CPRS solutions can include components that autonomously execute functions traditionally performed by an onboard copilot and/or can establish connections with one or more copilot ground base stations (GBSs) that enable ground-based copilots to remotely provide assistance with operating the aircraft.

Display augmentation

A method is provided, comprising: (i) receiving, at a system, from an autonomous vehicle: (a) sensor data captured by a sensor of the vehicle, and (b) output data generated based on environmental data captured by one or more sensors of the vehicle and used by a planning component to navigate in an environment, (ii) causing one or more displays to display at least one of: a representation of the sensor data, or a model of the environment based on the output data, (iii) determining a location of a feature within the sensor data or the model, (iv) causing the one or more displays to display an indication of the feature at a position corresponding to the location, (v) receiving, at the system, user input, and (vi) sending, by the system to the vehicle, data based on the user input to cause the vehicle to take an action.
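Step (iv) of this method, displaying an indication at a position corresponding to the feature's location, amounts to mapping a location in sensor-data coordinates to display coordinates. The simple proportional scaling below is an illustrative assumption; a real system would account for the camera model and any cropping.

```python
def feature_indicator(feature_xy, image_size, display_size):
    """Scale a feature's pixel location in the sensor image to the
    corresponding position on the operator's display.

    feature_xy:   (x, y) in image pixels
    image_size:   (width, height) of the sensor image
    display_size: (width, height) of the display
    """
    (fx, fy), (iw, ih), (dw, dh) = feature_xy, image_size, display_size
    return (fx * dw / iw, fy * dh / ih)
```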

PROCESSING SYSTEM, PROCESSING METHOD, AND STORAGE MEDIUM THEREOF
20260079488 · 2026-03-19

A processing system includes at least one processor, which is configured to: acquire a required content of user service; acquire service capability information representing provision capabilities of the user service, which are associated with autonomous traveling devices waiting in a visual field area visually recognized by the user through a wearable terminal worn by the user; search for a target autonomous traveling device whose provision capability matches the required content among the provision capabilities represented by the service capability information for the respective autonomous traveling devices in the visual field area; display, in a superimposed manner, an XR enhanced image, which highlights the target autonomous traveling device, on the visual field area; and provide the user service by driving, within the visual field area, the target autonomous traveling device selected by the user in response to superimposed display of the XR enhanced image.
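The search step in this abstract, finding devices in the visual field whose provision capability matches the required content, can be sketched as a set-containment check. Representing capabilities as string sets is an illustrative assumption.

```python
def find_target_devices(required, devices):
    """Return ids of autonomous traveling devices in the visual field
    whose provision capabilities cover the required service content.

    required: set of required capability strings
    devices:  {device_id: set of capability strings}
    """
    return [dev for dev, caps in devices.items() if required <= caps]
```

The resulting candidates are the ones a system would highlight with the superimposed XR enhanced image for the user to select.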

Enhancements to beyond-visual-line-of-sight (BVLOS) operation of remote-controlled apparatuses

Beyond-visual-line-of-sight (BVLOS) control of drones is enhanced using wireless communication via a cellular network. In an example system, a drone and a remote control station are configured to communicate via a 5G network. Drone control is further enhanced with extended reality (XR) display. Video stream data captured by the drone is replaced, supplemented, and/or overlaid with XR visual data in an XR environment. The XR visual data corresponds to a perspective from a virtual location in the XR environment that mirrors a real-world location of the drone. This relaxes the constraints and requirements associated with the network communication of video stream data from the drone for BVLOS control of the drone. Drone control is further enhanced with voice command capability. A user vocally utters a coarse-resolution or abstract command, and a drone control station translates the utterance into a sequence of fine-resolution drone commands that implement the vocally-uttered command.
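The voice-command translation described above can be sketched as a mapping from a coarse utterance to a sequence of fine-resolution drone commands. The vocabulary, command names, and parameters below are all illustrative assumptions, not part of the disclosure.

```python
def translate_command(utterance: str, leg_m: float = 10.0):
    """Translate a coarse-resolution voice command into a sequence of
    fine-resolution (verb, argument) drone commands. The command set
    here is a hypothetical example vocabulary."""
    u = utterance.lower().strip()
    if u == "circle the building":
        # Approximate a closed loop as four straight legs joined
        # by 90-degree yaw turns.
        seq = []
        for _ in range(4):
            seq += [("forward", leg_m), ("yaw", 90)]
        return seq
    if u.startswith("go to waypoint"):
        return [("navigate", u.split()[-1])]
    # Unrecognized utterances fall back to a safe hover.
    return [("hover", 0)]
```

A deployed system would put a speech recognizer in front of this step and validate each fine-resolution command against the drone's current flight envelope.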