G05D1/628

System for path planning in areas outside of sensor field of view by an autonomous mobile device

An autonomous mobile device (AMD) moves around a physical space while performing tasks. The AMD may have sensors with fields of view (FOVs) that are forward-facing. As the AMD moves forward, a safe region is determined based on data from those forward-facing sensors. The safe region describes a geographical area clear of obstacles during recent travel. Before moving outside of the current FOV, the AMD determines whether a move outside of the current FOV keeps the AMD within the safe region. For example, if a path that is outside the current FOV would result in the AMD moving outside the safe region, the AMD modifies the path until poses associated with the path result in the AMD staying within the safe region. The resulting safe path may then be used by the AMD to safely move outside the current FOV.
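The safe-region check described in this abstract can be sketched as follows. This is an illustrative simplification, not the patented implementation: the safe region is modeled as a set of grid cells the AMD has recently observed to be obstacle-free, and a candidate path leading outside the current FOV is truncated at the first pose that would leave that region.

```python
# Hypothetical sketch of the safe-region check: `safe_cells` holds grid
# cells recently observed clear of obstacles; a candidate path is clipped
# so every pose stays inside the safe region. All names are illustrative.

def update_safe_region(safe_cells, cleared_cells):
    """Add newly observed obstacle-free cells to the safe region."""
    safe_cells.update(cleared_cells)

def make_path_safe(path, safe_cells):
    """Truncate a candidate path at the first pose leaving the safe region."""
    safe_path = []
    for pose in path:
        if pose not in safe_cells:
            break  # this pose would take the AMD outside the safe region
        safe_path.append(pose)
    return safe_path
```

For example, a path `[(0, 0), (0, 1), (1, 5), (0, 2)]` checked against a safe region of `{(0, 0), (0, 1), (0, 2)}` would be clipped to `[(0, 0), (0, 1)]`, since `(1, 5)` lies outside the region.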

Method, system and apparatus for dynamic task sequencing

A method in a navigational controller includes: obtaining (i) a plurality of task fragments identifying respective sets of sub-regions in a facility, and (ii) an identifier of a task to be performed by a mobile automation apparatus at each of the sets of sub-regions; selecting an active one of the task fragments according to a sequence specifying an order of execution of the task fragments; generating a path including (i) a taxi portion from a current position of the mobile automation apparatus to the sub-regions identified by the active task fragment, and (ii) an execution portion traversing the sub-regions identified by the active task fragment; during travel along the taxi portion, determining, based on a current pose of the mobile automation apparatus, whether to initiate execution of another task fragment; and when the determination is affirmative, updating the sequence to mark the other task fragment as the active task fragment.
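The resequencing step in this abstract can be illustrated with a minimal sketch. The names, the distance threshold, and the 2-D point model of sub-regions are assumptions for illustration, not from the patent: while taxiing toward the active task fragment, the controller checks whether the current pose is close to a sub-region of a later fragment, and if so promotes that fragment to active.

```python
# Illustrative sketch of dynamic task resequencing: during the taxi
# portion, a later task fragment whose sub-region lies near the current
# pose is promoted to the active (front) position in the sequence.

import math

def nearest_fragment(pose, sequence, fragments, threshold=1.0):
    """Return the index in `sequence` of a fragment with a sub-region
    within `threshold` of `pose`, or None if there is none."""
    for i, frag_id in enumerate(sequence):
        for sub_region in fragments[frag_id]:
            if math.dist(pose, sub_region) <= threshold:
                return i
    return None

def maybe_resequence(pose, sequence, fragments, threshold=1.0):
    """Mark a nearby fragment as active by moving it to the front."""
    i = nearest_fragment(pose, sequence, fragments, threshold)
    if i is not None and i > 0:
        sequence.insert(0, sequence.pop(i))
    return sequence
```

So a sequence `["A", "B"]` would become `["B", "A"]` if, mid-taxi toward fragment A, the apparatus passes within the threshold of one of fragment B's sub-regions.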

System for spot cleaning by a mobile robot
11961285 · 2024-04-16

A system for enabling spot cleaning includes a mobile computing device and a mobile cleaning robot. The mobile computing device includes at least one camera configured to capture images of an environment, and at least one data processor configured to (a) establish, based at least in part on first information provided by the at least one camera, a coordinate system in the environment, (b) determine, based at least in part on second information provided by the at least one camera, a first set of coordinates of a region at a first location, (c) determine, based at least in part on third information provided by the at least one camera, a second set of coordinates of a mobile cleaning robot at a second location, (d) send the first set of coordinates and the second set of coordinates, or coordinates of the first location relative to the second location, to the mobile cleaning robot, and (e) send an instruction to the mobile cleaning robot to request that the mobile cleaning robot travel to the first location.
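Step (d) allows the target to be sent as coordinates relative to the robot's location. A hedged sketch of that transformation, assuming a planar (2-D) simplification and a known robot heading in the camera-established frame (both assumptions, not details from the abstract):

```python
# Hypothetical sketch: express the spot-clean target in the robot's own
# frame, given both points in the shared frame established by the camera.

import math

def relative_coordinates(target_xy, robot_xy, robot_heading):
    """Transform a target point from the shared frame into the robot frame."""
    dx = target_xy[0] - robot_xy[0]
    dy = target_xy[1] - robot_xy[1]
    cos_h, sin_h = math.cos(robot_heading), math.sin(robot_heading)
    # Rotate the offset by -heading so +x points along the robot's forward axis.
    return (cos_h * dx + sin_h * dy, -sin_h * dx + cos_h * dy)
```

With the robot at the origin facing along +y (heading π/2), a target at (0, 2) in the shared frame becomes (2, 0) in the robot frame, i.e. two units straight ahead.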

Method and system for developing autonomous vehicle training simulations

Methods and systems for generating vehicle motion planning model simulation scenarios are disclosed. The system receives a base simulation scenario with features of a scene through which a vehicle may travel. The system then generates an augmentation element with a simulated behavior for an object in the scene by: (i) accessing a data store in which behavior probabilities are mapped to object types to retrieve a set of behavior probabilities for the object; and (ii) applying a randomization function to the behavior probabilities to select the simulated behavior for the object. The system adds the augmentation element to the base simulation scenario at an interaction zone to yield an augmented simulation scenario, then applies the augmented simulation scenario to an autonomous vehicle motion planning model to train the motion planning model.
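Steps (i) and (ii) amount to a weighted random draw over per-object-type behavior probabilities. A minimal sketch, where the behavior names, probabilities, and table layout are illustrative assumptions rather than details from the patent:

```python
# Illustrative sketch: behavior probabilities keyed by object type (the
# "data store"), and a weighted random draw as the randomization function.

import random

# Hypothetical probability table; values per type sum to 1.
BEHAVIOR_PROBABILITIES = {
    "pedestrian": {"cross": 0.5, "wait": 0.3, "jaywalk": 0.2},
    "vehicle": {"yield": 0.6, "proceed": 0.4},
}

def select_behavior(object_type, rng=random):
    """Select a simulated behavior for an object by weighted random draw."""
    probs = BEHAVIOR_PROBABILITIES[object_type]
    behaviors, weights = zip(*probs.items())
    return rng.choices(behaviors, weights=weights, k=1)[0]
```

Because the draw is weighted rather than uniform, the augmented scenarios reflect how often each behavior is expected for that object type, while still varying between runs.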

Mobile robots and systems with mobile robots
20240118709 · 2024-04-11

The improved mobile robots, systems, and methods described herein can enhance security and monitoring services for grounds and property. Such mobile robots and systems can also enhance policing, as well as customer service and help desk functionality. In some embodiments, they can enhance exploration, such as space exploration.

User interface for displaying object-based indications in an autonomous driving system

A vehicle has a plurality of control apparatuses, a user input, a geographic position component, an object detection apparatus, memory, and a display. A processor is also included and is programmed to receive destination information, identify a route, and determine the current geographic location of the vehicle. The processor is also programmed to identify an object and object type based on object information received from the object detection apparatus and to determine at least one warning characteristic of the identified object based on at least one of: the object type, the detected object's proximity to the vehicle, the location of the detected object relative to predetermined peripheral areas of the vehicle, the current geographic location of the vehicle, and the route. The processor is also configured to select and display on the display an object warning image based on the at least one warning characteristic.
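The mapping from object attributes to a displayed warning image can be sketched as a small decision table. The thresholds, characteristic names, and image filenames below are assumptions for illustration only:

```python
# Illustrative sketch: derive a warning characteristic from object type
# and proximity, then map the characteristic to a display image.

def warning_characteristic(object_type, distance_m):
    """Classify the severity of a detected object (thresholds are assumed)."""
    if distance_m < 2.0:
        return "critical"
    if object_type in ("pedestrian", "cyclist") and distance_m < 10.0:
        return "caution"
    return "notice"

# Hypothetical mapping from characteristic to warning image.
WARNING_IMAGES = {
    "critical": "warning_red.png",
    "caution": "warning_yellow.png",
    "notice": "warning_gray.png",
}

def select_warning_image(object_type, distance_m):
    """Select the warning image to display for a detected object."""
    return WARNING_IMAGES[warning_characteristic(object_type, distance_m)]
```

A real system would fold in the remaining inputs the abstract lists, such as the object's position relative to the vehicle's peripheral areas, the current geographic location, and the route.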

Autonomous platform guidance systems with task planning and obstacle avoidance
11953910 · 2024-04-09

The described positional awareness techniques employ sensory data gathering and analysis hardware, illustrated with reference to specific example implementations, to improve the use of sensors, techniques, and hardware design. These improvements can enable specific embodiments to find a new area to cover when a robot performing an area coverage task encounters an unexpected obstacle while traversing the area. The sensory data are gathered from an operational camera and one or more auxiliary sensors.
