Patent classifications
G05D1/2467
SEMANTIC ROBOT HAZARD AVOIDANCE WITH MULTI-MODAL PROMPTING
Systems and methods for semantic robot hazard avoidance with multi-modal prompting are provided. In one aspect, a method includes receiving a user input indicative of one or more hazards in an environment of a robot and image data indicative of the one or more hazards in the environment. The method also includes generating one or more segments of the image data, each segment corresponding to at least one of the one or more hazards indicated by the user input. The method further includes identifying a semantic label for each of the one or more segments, generating a hazard map including a location of each of the one or more segments and the corresponding semantic label, and navigating the robot through the environment based at least in part on the hazard map.
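The hazard-map step above can be sketched as follows. This is a minimal illustration, not the patented method: it assumes each image segment reduces to a grid cell and a label, and that the planner avoids hazards via a cost penalty. All names (`Segment`, `HazardMap`, `hazard_cost`) are hypothetical.

```python
# Hypothetical sketch: segments carry a location and a semantic label,
# and the hazard map penalizes cells occupied by labeled hazards so a
# planner routes around them.
from dataclasses import dataclass

@dataclass
class Segment:
    location: tuple   # (row, col) grid cell of the segment centroid
    label: str        # semantic label, e.g. "wet_floor"

class HazardMap:
    def __init__(self, segments):
        # Map from grid cell to its semantic hazard label.
        self.cells = {s.location: s.label for s in segments}

    def hazard_cost(self, cell, penalty=100.0):
        # Hazard cells get a large traversal cost; free cells cost nothing.
        return penalty if cell in self.cells else 0.0

hazards = [Segment((2, 3), "wet_floor"), Segment((5, 5), "cable")]
hmap = HazardMap(hazards)
print(hmap.hazard_cost((2, 3)))  # hazard cell
print(hmap.hazard_cost((0, 0)))  # free cell
```

In practice the cost would feed a planner such as A*, with the semantic label selecting the penalty (a cable might be impassable, a wet floor merely discouraged).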
Robotic post
A semantic sensing system includes a processor, a memory, a plurality of wireless-communication-enabled devices, and at least one sensing element, the memory storing a plurality of mapped endpoints. The processor is configured to apply semantic drift or entropy to determine non-affirmative circumstances based on inputs from the at least one sensing element, causing the system to perform semantic augmentation toward a first endpoint supervisor in relation to the non-affirmative determinations.
APPARATUS AND METHOD FOR GENERATING SEMANTIC MAP-BASED ROBOT DRIVING ROUTE PLAN FOR TRANSPORTATION VULNERABLE
An apparatus for generating a semantic map-based robot driving route plan for the transportation vulnerable includes: a semantic map generation unit configured to generate a semantic map for an area where a driving robot is driving, based on real-time location tracking data of the driving robot and semantic data on the surrounding environment; a safety zone generation unit configured to calculate heights for a plurality of objects recognized while the driving robot is driving on the generated semantic map, and to generate a safety zone for a specific object determined, based on the heights, to be transportation vulnerable among the plurality of objects; and a driving route plan generation unit configured to generate, in real time, a second driving route plan different from a first driving route plan for the safety zone when the safety zone is generated while the driving robot is driving with the first driving route plan.
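The height-based safety-zone idea can be illustrated with a small sketch. The threshold and radii below are assumptions for illustration, not values from the patent: an object whose estimated height falls below a cutoff (e.g. a wheelchair user or a child) receives an enlarged clearance radius, which would trigger replanning.

```python
# Illustrative sketch: objects below a height threshold are treated as
# transportation vulnerable and given a wider safety zone. All constants
# are hypothetical.
def safety_zone_radius(height_m, base_radius=0.5,
                       vulnerable_threshold=1.4, extra=1.0):
    """Return the clearance radius for an object of the given height."""
    if height_m < vulnerable_threshold:
        return base_radius + extra   # vulnerable: widen the safety zone
    return base_radius

print(safety_zone_radius(1.2))  # below threshold -> 1.5
print(safety_zone_radius(1.8))  # typical adult   -> 0.5
```

A route planner would then regenerate its plan whenever a new safety zone of this kind intersects the current path.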
SEMANTIC LOCAL MAP GENERATION DEVICE AND METHOD
A semantic local map generation device may include a multi-sensor unit including an RGBD sensor and an inertial measurement unit (IMU) sensor attached to a body of a robot, and a data processing unit operatively connected to the multi-sensor unit and configured to estimate a pose of the robot and a semantic point cloud with respect to a driving region from sensor data obtained from the multi-sensor unit, and to generate a semantic local map based on the estimated pose and the estimated semantic point cloud.
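The pose-plus-point-cloud step can be sketched in 2D. This is a minimal sketch under assumed formats, not the patented device: a pose `(x, y, yaw)`, a semantic point cloud given in the robot frame, and a grid map that keeps the majority semantic class per cell.

```python
# Hedged sketch: transform semantically labeled points from the robot
# frame into the map frame and bin them into grid cells, keeping the
# majority class per occupied cell. All names are illustrative.
import math
from collections import Counter, defaultdict

def build_local_map(pose, points, cell=0.5):
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    votes = defaultdict(Counter)
    for px, py, label in points:      # point in robot frame + class label
        mx = x + c * px - s * py      # rotate/translate into map frame
        my = y + s * px + c * py
        key = (int(mx // cell), int(my // cell))
        votes[key][label] += 1        # per-cell semantic vote
    # Majority semantic class per occupied cell.
    return {k: v.most_common(1)[0][0] for k, v in votes.items()}

cloud = [(1.0, 0.0, "floor"), (1.1, 0.1, "floor"), (2.0, 0.0, "chair")]
local_map = build_local_map((0.0, 0.0, 0.0), cloud)
print(local_map)
```

In the described device, the pose estimate would come from fusing the RGBD and IMU data, and the labels from a segmentation model applied to the RGBD frames.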
SEMANTIC-BASED ROBOTIC NAVIGATION AND MANIPULATION IN COMPLEX ENVIRONMENTS
A method of and system for navigation and manipulation for a robot can include obtaining, by at least one camera and at least one depth sensor, a first visual data set and translating the first visual data set into a continuous three-dimensional map. The three-dimensional map can include semantic information and geometric information. The method and system may further include receiving instruction data and converting the instruction data into at least one task for the robot within the continuous three-dimensional map.
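The instruction-to-task conversion above can be sketched as a lookup against the semantic layer of the map. The map format, label set, and matching rule below are assumptions for illustration; the patent does not specify them.

```python
# Hedged sketch: match words in a natural-language instruction against
# semantic labels stored in the 3D map, yielding a task target for the
# robot. All names and data are hypothetical.
semantic_map = {
    "cup":   (1.2, 0.4, 0.9),   # label -> 3D position in the map
    "table": (1.0, 0.5, 0.0),
}

def instruction_to_task(instruction, sem_map):
    """Return (action, label, position) for the first known object mentioned."""
    words = instruction.lower().split()
    for label, pos in sem_map.items():
        if label in words:
            action = "grasp" if "pick" in words else "navigate"
            return (action, label, pos)
    return None

print(instruction_to_task("pick up the cup", semantic_map))
```

A real system would replace the keyword match with a language model and ground the task in both the semantic and geometric layers of the continuous map.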
MACHINE LEARNING-BASED SYSTEM AND METHOD FOR GENERATING SEMANTIC MAPS FOR OFFROAD AUTONOMY MACHINES
A mapping system for an autonomous mobile robot includes a 3D convolutional encoder network that generates 3D feature maps from 3D point cloud data. The network sequentially compresses the feature dimension of the 3D input data to reduce the computational complexity and enable feature extraction to be performed in substantially real-time. Skip connections connect the outputs of the encoder layers of the convolutional encoder network to counterpart decoder layers of a 2D convolutional decoder network. An attention-based 3D to 2D projection layer receives the 3D feature maps generated by the encoder layers via the skip connections and projects the 3D feature maps onto 2D BEV feature maps which are provided to the counterpart decoder layers as input. The projection layer automatically estimates ground level of 3D feature maps and filters out overhanging objects that are irrelevant to ground-level navigation.
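The attention-based 3D-to-2D projection can be sketched with NumPy. This is a simplified stand-in, not the patented network: softmax weights over the height axis collapse a 3D feature volume to a BEV map, and voxels far above a per-column ground estimate are masked out (the overhanging-object filter). Shapes, the logit rule, and the height cutoff are illustrative assumptions.

```python
# Hedged sketch of attention-based 3D -> 2D BEV projection with
# ground-level filtering. All shapes and constants are hypothetical.
import numpy as np

def bev_project(feats, occupancy, max_above_ground=4):
    # feats: (Z, X, Y, C) feature volume; occupancy: (Z, X, Y) booleans.
    Z, X, Y, C = feats.shape
    z_idx = np.arange(Z)[:, None, None]
    # Ground level per BEV column = lowest occupied voxel index.
    ground = np.where(occupancy, z_idx, Z).min(axis=0)      # (X, Y)
    keep = z_idx < (ground + max_above_ground)              # drop overhangs
    scores = feats.sum(axis=-1)                             # attention logits
    scores = np.where(keep, scores, -1e9)                   # mask filtered voxels
    w = np.exp(scores - scores.max(axis=0))
    w = w / w.sum(axis=0)                                   # softmax over height
    return (w[..., None] * feats).sum(axis=0)               # (X, Y, C) BEV map

vol = np.random.rand(6, 4, 4, 8)
occ = np.random.rand(6, 4, 4) > 0.5
print(bev_project(vol, occ).shape)  # (4, 4, 8)
```

In the described system, the attention weights would be learned rather than derived from raw feature sums, and the BEV maps would feed the 2D decoder layers via the skip connections.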
Method and system for multi-object tracking and navigation without pre-sequencing
This disclosure relates generally to a method and system for multi-object tracking and navigation without pre-sequencing. Multi-object navigation is an embodied AI task in which a robot must localize an instance of each of a plurality of target objects in an environment, whereas object navigation searches for an instance of only a single target object. The method of the present disclosure employs a deep reinforcement learning (DRL) based framework for sequence-agnostic multi-object navigation. The robot receives from an actor-critic network a deterministic local policy used to compute a low-level navigational action, navigating along a shortest path calculated from the robot's current location to a long-term goal so as to reach a target object. The deep reinforcement learning network is trained with a computed reward function evaluated each time the robot performs a navigational action toward reaching an instance of the plurality of target objects.
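The sequence-agnostic reward idea can be illustrated with a small sketch. The shaping rule and constants below are assumptions, not values from the disclosure: the agent is rewarded for reducing its distance to the nearest remaining target, with a bonus on reaching any instance in any order.

```python
# Illustrative sketch of sequence-agnostic reward shaping: progress is
# measured against the closest of the remaining targets, so no target
# ordering is imposed. All constants are hypothetical.
def step_reward(prev_dists, new_dists, reached_bonus=10.0, step_cost=0.1):
    """Reward = progress toward the closest target, minus a time penalty."""
    progress = min(prev_dists) - min(new_dists)
    reward = progress - step_cost
    if min(new_dists) == 0:          # reached an instance of some target
        reward += reached_bonus
    return reward

print(step_reward([5.0, 8.0], [4.0, 8.0]))   # one unit of progress
print(step_reward([1.0, 8.0], [0.0, 8.0]))   # goal reached, bonus applies
```

An actor-critic learner would maximize the discounted sum of such rewards, with the critic estimating the value of the current observation.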
LANGUAGE-GROUNDED VEHICLE PATH PLANNING
A device includes a memory configured to store images representing scenes associated with a vehicle. The device includes one or more processors configured to obtain a set of images representing a scene associated with the vehicle. The one or more processors are configured to generate, based on the set of images, language-grounded scene tokens. The one or more processors are configured to provide the language-grounded scene tokens to a planning transformer to generate a path plan prediction for the vehicle.
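The token pipeline above can be sketched end to end. This is a hedged stand-in, not the patented device: a fixed embedding table plays the role of the language grounding, detections become scene tokens, and a trivial averaging function stands in for the planning transformer. Every name and shape here is an assumption.

```python
# Hypothetical sketch: detections are tagged with language-grounded class
# embeddings to form scene tokens, which a placeholder "planner" consumes
# to emit waypoints. The real system would use learned models throughout.
import numpy as np

VOCAB = {"car": 0, "pedestrian": 1, "lane": 2}   # assumed label set

def scene_tokens(detections, dim=4, seed=0):
    rng = np.random.default_rng(seed)
    table = rng.standard_normal((len(VOCAB), dim))   # language embeddings
    toks = [np.concatenate([table[VOCAB[label]], [x, y]])
            for label, (x, y) in detections]
    return np.stack(toks)                            # (N, dim + 2)

def plan(tokens, horizon=3):
    # Placeholder planner: interpolate toward the mean token position.
    target = tokens[:, -2:].mean(axis=0)
    return [tuple(target * (t + 1) / horizon) for t in range(horizon)]

toks = scene_tokens([("car", (2.0, 1.0)), ("lane", (0.0, 3.0))])
print(plan(toks))
```

In the described device, the tokens would come from a vision-language encoder over the image set, and the planning transformer would attend over them to predict the path.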