Patent classifications
G05D1/245
Autonomous platform guidance systems with task planning and obstacle avoidance
The described positional awareness techniques employ sensory data gathering and analysis hardware, described with reference to specific example implementations, to improve the use of sensors, techniques, and hardware design. These improvements can enable specific embodiments in which a robot performing an area coverage task finds a new area to cover after encountering an unexpected obstacle while traversing the area. The sensory data are gathered from an operational camera and one or more auxiliary sensors.
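The abstract's core behavior — redirecting to a new uncovered area when an obstacle blocks the current one — can be illustrated with a minimal sketch. The grid encoding, the function name `next_uncovered_cell`, and the breadth-first strategy are all illustrative assumptions, not the patent's actual method:

```python
from collections import deque

def next_uncovered_cell(grid, start):
    """Breadth-first search for the nearest uncovered, unblocked cell.

    grid values: 0 = uncovered, 1 = covered, 2 = obstacle.
    Returns the (row, col) of the nearest cell still to be covered,
    or None if the whole reachable area is already done.
    """
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == 0:
            return (r, c)
        # Expand to 4-connected neighbors that are in bounds and not obstacles.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in seen and grid[nr][nc] != 2:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None
```

Because the search is breadth-first, the first uncovered cell found is also a nearest one, so the robot resumes coverage with minimal detour around the obstacle.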
Mobile robot system and method for generating map data using straight lines extracted from visual images
A mobile robot is configured to navigate on a sidewalk and deliver a delivery to a predetermined location. The robot has a body and an enclosed space within the body for storing the delivery during transit. At least two cameras are mounted on the robot body and are adapted to take visual images of an operating area. A processing component is adapted to extract straight lines from the visual images taken by the cameras and generate map data based at least partially on the images. A communication component is adapted to send and receive image and/or map data. A mapping system includes at least two such mobile robots, with the communication component of each robot adapted to send and receive image data and/or map data to the other robot. A method involves operating such a mobile robot in an area of interest in which deliveries are to be made.
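The straight-line extraction step that the processing component performs could be realized many ways; a common one is a Hough transform over edge pixels. The pure-Python accumulator below is a much-simplified illustrative sketch (the bin sizes, vote threshold, and function name `hough_lines` are assumptions), not the robot's actual pipeline:

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, rho_step=1.0, min_votes=3):
    """Minimal Hough transform: vote each edge point into (rho, theta)
    bins and return the parameters of bins with enough votes.

    Each returned line is the set of (x, y) satisfying
    x*cos(theta) + y*sin(theta) = rho.
    """
    votes = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(round(rho / rho_step), t)] += 1
    return [(r * rho_step, math.pi * t / n_theta)
            for (r, t), v in votes.items() if v >= min_votes]
```

The (rho, theta) parameters of the detected lines are compact landmarks well suited to the map data the robots exchange, which is presumably part of the appeal of lines over raw pixels.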
Speed control system for an autonomous mobile device
An autonomous mobile device (AMD) may perform tasks within a physical space. The AMD may move over ramps, bumps, or navigate around obstacles. The AMD may have an inertial measurement unit (IMU) and distance sensors. The IMU provides tilt information indicative of the AMD being on a flat surface or a ramp. The distance sensors provide information on distances between the AMD and surrounding obstacles. Using IMU measurements, the AMD determines a first speed limit that is safe given the tilt of the AMD. Using the distance sensors, the AMD determines a second speed limit that is safe given a distance to an obstacle. The AMD determines a maximum speed based on the first and second speed limits. Based on the maximum speed, the AMD determines whether to adjust a current speed and by how much.
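The two-limit scheme in the abstract — a tilt-based limit from the IMU, a distance-based limit from the range sensors, and the lower of the two as the maximum speed — can be sketched as follows. All thresholds and the function name `target_speed` are illustrative assumptions:

```python
def target_speed(tilt_deg, min_obstacle_dist_m,
                 flat_limit=2.0, ramp_limit=0.8, tilt_threshold_deg=5.0,
                 stop_dist=0.3, slow_zone=2.0):
    """Combine a tilt-based and a distance-based speed limit (m/s).

    The IMU-derived limit drops when the device is on a ramp; the
    distance-derived limit scales down linearly inside the slow zone
    and reaches zero at stop_dist.
    """
    # First speed limit: safe given the tilt reported by the IMU.
    tilt_limit = ramp_limit if abs(tilt_deg) > tilt_threshold_deg else flat_limit
    # Second speed limit: safe given the distance to the nearest obstacle.
    if min_obstacle_dist_m <= stop_dist:
        dist_limit = 0.0
    elif min_obstacle_dist_m >= slow_zone:
        dist_limit = flat_limit
    else:
        frac = (min_obstacle_dist_m - stop_dist) / (slow_zone - stop_dist)
        dist_limit = flat_limit * frac
    # Maximum speed is the more restrictive of the two limits.
    return min(tilt_limit, dist_limit)
```

Comparing the current speed against this value then tells the AMD whether, and by how much, to adjust.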
Skydiving Robots which precisely land and deliver Payloads
Device, system, and method for Skydiving Robots, which can skydive using customized or off-the-shelf parachutes and deliver civilian or military payloads. The Skydiving Robots can freefall, open the parachute, and steer toward the target; they carry payloads and can operate in daytime or in pitch-black night, using GPS guidance to land precisely. If they exit the aircraft at up to or over 30,000 feet above ground level (AGL), the final target could be miles away. With a wide array of sensors such as cameras, they are ideal reconnaissance scouts. They can carry payloads and land precisely within a few feet of a target.
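One elementary building block of the GPS-guided steering described above is the bearing from the current fix to the landing target. The great-circle formula below is standard navigation math, offered as an illustrative sketch; the function name `bearing_to_target` is an assumption:

```python
import math

def bearing_to_target(lat, lon, tgt_lat, tgt_lon):
    """Initial great-circle bearing in degrees (0 = north, clockwise)
    from the current GPS fix to the landing target."""
    phi1, phi2 = math.radians(lat), math.radians(tgt_lat)
    dlon = math.radians(tgt_lon - lon)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360.0
```

A steering loop would compare this bearing against the robot's current heading and pull the appropriate parachute toggle to close the difference.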
Safety procedure analysis for obstacle avoidance in autonomous vehicles
In various examples, a current claimed set of points representative of a volume in an environment occupied by a vehicle at a time may be determined. A vehicle-occupied trajectory, corresponding to a first safety procedure of the vehicle, and at least one object-occupied trajectory, corresponding to a second safety procedure of an object, may be generated at the time. An intersection between the vehicle-occupied trajectory and an object-occupied trajectory may be determined based at least in part on comparing the two trajectories. Based on the intersection, the vehicle may then execute the first safety procedure or an alternative procedure that, when implemented by the vehicle while the object implements the second safety procedure, is determined to have a lesser likelihood of incurring a collision between the vehicle and the object than the first safety procedure.
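The intersection test between an occupied trajectory of the vehicle and one of the object can be sketched very simply if each trajectory is a sequence of time-stamped positions and each agent's footprint is approximated by a disc. This disc approximation and the function name `first_intersection` are simplifying assumptions, not the patent's representation:

```python
import math

def first_intersection(vehicle_traj, object_traj, vehicle_radius, object_radius):
    """Check two time-sampled trajectories for overlap.

    Each trajectory is a list of (t, x, y) samples taken at the same
    timestamps. Returns the first timestamp at which the two discs
    overlap, or None if they never do.
    """
    clearance = vehicle_radius + object_radius
    for (t, vx, vy), (_, ox, oy) in zip(vehicle_traj, object_traj):
        if math.hypot(vx - ox, vy - oy) < clearance:
            return t
    return None
```

A planner could run this check on the trajectory pairs produced by each candidate procedure and prefer the procedure whose trajectory yields no (or the latest) intersection.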
Multi-sensor-fusion-based autonomous mobile robot indoor and outdoor positioning method and robot
The present application relates to a multi-sensor-fusion-based autonomous mobile robot indoor and outdoor positioning method and a robot. The method includes: acquiring GPS information and three-dimensional point cloud data of a robot at a current position; acquiring a two-dimensional map corresponding to the GPS information of the robot at the current position; projecting the three-dimensional point cloud data of the robot at the current position onto a road surface where the robot is currently moving, to obtain two-dimensional point cloud data of the robot at the current position; and matching the two-dimensional point cloud data of the robot at the current position with the two-dimensional map corresponding to the GPS information of the robot at the current position, to determine the current position of the robot.
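The projection and matching steps can be sketched at a very coarse level: flatten the cloud onto the ground plane, then score how many projected points land on occupied cells of the 2D map tile. The levelled-cloud assumption, the grid encoding, and the names `project_to_ground` / `match_score` are all illustrative, not the application's actual algorithm:

```python
import math

def project_to_ground(points_3d):
    """Project 3D points onto the ground plane by dropping the height
    coordinate (assumes the cloud is already levelled so the road
    surface lies at z = 0)."""
    return [(x, y) for x, y, z in points_3d]

def match_score(points_2d, occupancy, cell=1.0):
    """Fraction of projected points falling on occupied map cells.

    `occupancy` is a set of (i, j) grid indices marked occupied in the
    2D map tile fetched for the current GPS fix.
    """
    hits = sum(1 for x, y in points_2d
               if (math.floor(x / cell), math.floor(y / cell)) in occupancy)
    return hits / len(points_2d) if points_2d else 0.0
```

A localizer would evaluate this score over candidate poses near the GPS fix and take the best-scoring pose as the robot's refined current position.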
System and method for positioning a marine vessel
A marine vessel control system comprises a propulsion unit and a steering actuator for steering the propulsion unit. There is a shift actuator for shifting gears in the propulsion unit and a throttle actuator for increasing or decreasing throttle to the propulsion unit. There is an input device for providing user inputted steering commands to the steering actuator and for providing user inputted shift and throttle commands to the shift actuator and the throttle actuator. There is a sensor for detecting a global position and a heading direction of the marine vessel. A controller receives position and heading values of the marine vessel from the sensor. The controller compares the received position value to a pre-programmed position value to determine a position error difference. The controller also compares the received heading value to a pre-programmed heading value to determine a heading error difference.
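The controller's two comparisons — received position versus pre-programmed position, and received heading versus pre-programmed heading — amount to computing two error terms, with the heading error wrapped so the vessel turns the short way around. The function name `position_errors` and the east/north frame are illustrative assumptions:

```python
def position_errors(pos, heading_deg, target_pos, target_heading_deg):
    """Position error (east/north offsets) and heading error wrapped to
    [-180, 180) degrees, as a controller would compare the sensed pose
    against the pre-programmed hold pose."""
    ex = target_pos[0] - pos[0]
    ey = target_pos[1] - pos[1]
    # Wrap so that, e.g., 350 deg -> 10 deg is a +20 deg turn, not -340.
    eh = (target_heading_deg - heading_deg + 180.0) % 360.0 - 180.0
    return ex, ey, eh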
System and method of relative navigation in a network of mobile vehicles
A self-contained, high precision navigation method and system for a mobile vehicle includes an active coherent imaging sensor array with multiple receivers that observes the surrounding environment and a digital processing component that processes the received signals to form interferometric images and determine the precise three-dimensional location and three-dimensional orientation of the vehicle within that environment. A mesh navigation system for a network of mobile vehicles is provided where each mobile vehicle hosts an active coherent imaging sensor that observes a common area in the environment that surrounds the network of mobile vehicles. The navigation system on each mobile vehicle receives signals from the other mobile vehicles reflected from the common area in the environment. These signals are processed onboard each mobile vehicle to form interferometric images and determine the precise three-dimensional location of each mobile vehicle relative to the others operating and moving within the network.
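The abstract's interferometric processing is well beyond a short sketch, but the geometric core of relative navigation — fixing a position from measurements against known reference points — can be illustrated with plain range-based trilateration. This is a deliberately swapped-in simplification (ranges instead of interferometric images), and the function name `trilaterate` is an assumption:

```python
def trilaterate(anchors, ranges):
    """Solve for a 2D position from ranges to three known anchor
    positions, by linearizing the circle equations (subtract the first
    equation from the other two and solve the resulting 2x2 system)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # Linear system A @ [x, y] = b obtained from the range equations.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

In the mesh setting described above, each vehicle plays the role of both observer and reference for the others, so an analogous solve runs onboard every node of the network.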