G05D1/0274

Using mapped elevation to determine navigational parameters

Systems and methods for navigating a host vehicle. The system may perform operations including receiving, from an image capture device, at least one image representative of an environment of the host vehicle; analyzing the at least one image to identify an object in the environment of the host vehicle; determining a location of the host vehicle; receiving map information associated with the determined location of the host vehicle, wherein the map information includes elevation information associated with the environment of the host vehicle; determining a distance from the host vehicle to the object based on at least the elevation information; and determining a navigational action for the host vehicle based on the determined distance.
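The elevation-based ranging step can be illustrated with a simple flat-ray model (an assumption for illustration only; the patent does not specify this geometry, and the function name and parameters are invented here). If the map says the object's ground contact point sits below the camera by some elevation difference, the ray to that point drops (camera height + elevation difference) over the horizontal range:

```python
import math

def distance_from_elevation(camera_height_m, cam_ground_elev_m,
                            obj_ground_elev_m, angle_below_horizon_rad):
    """Estimate horizontal range to an object's ground contact point.

    Hypothetical flat-ray model: the ray from the camera to the object's
    base drops (camera height + elevation difference) metres over the
    horizontal distance d, so d = drop / tan(angle below horizon).
    """
    drop = camera_height_m + (cam_ground_elev_m - obj_ground_elev_m)
    return drop / math.tan(angle_below_horizon_rad)

# Camera 1.5 m above its local ground (map elevation 10.0 m), object
# base at map elevation 9.5 m, image ray 2 degrees below the horizon:
d = distance_from_elevation(1.5, 10.0, 9.5, math.radians(2.0))
```

Without the map's elevation term, a flat-ground assumption would use only the 1.5 m camera height and underestimate the range, which is the error the method addresses.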

Remote control apparatus, system, method, and program
11579615 · 2023-02-14

A remote control apparatus performs: calculating a path and a moving speed for a control target apparatus to reach a desired destination from its current position; measuring a communication delay time between the remote control apparatus and the control target apparatus; estimating an overshoot region based on the communication delay time, a stored size of the control target apparatus, and the moving speed; predicting whether the control target apparatus will contact a peripheral object, based on the path, the overshoot region, and stored peripheral object information; calculating moving speed information to be given to the control target apparatus so that a moving direction of the control target apparatus changes by a predetermined value or more when it is predicted that the control target apparatus will contact a peripheral object; and transmitting a control signal including the moving speed information to the control target apparatus.
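The overshoot estimation and contact prediction above can be sketched as follows. This is a minimal illustration, not the patent's method: the additive overshoot formula, the point-distance contact test, and the 0.5 speed-reduction factor are all assumptions.

```python
def overshoot_length(speed_mps, delay_s, target_size_m):
    """Worst-case extra travel: distance covered during the communication
    delay, padded by the stored size of the control target."""
    return speed_mps * delay_s + target_size_m

def predicts_contact(path_points, overshoot_m, obstacles, clearance_m):
    """Flag contact if any 2-D path point comes within
    (clearance + overshoot margin) of a stored peripheral object."""
    limit = clearance_m + overshoot_m
    for (px, py) in path_points:
        for (ox, oy) in obstacles:
            if ((px - ox) ** 2 + (py - oy) ** 2) ** 0.5 < limit:
                return True
    return False

def safe_speed(speed_mps, will_contact, reduction=0.5):
    """On a predicted contact, command a reduced speed so the target's
    direction can change before the overshoot region is reached."""
    return speed_mps * reduction if will_contact else speed_mps
```

A remote controller would recompute the overshoot each cycle as the measured delay changes, since a longer round trip directly inflates the region that must be kept clear.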

Autonomous mobile apparatus and control method thereof

The present disclosure provides an autonomous mobile apparatus and a control method thereof. The method includes: starting a SLAM mode; obtaining first image data captured by a first camera; extracting a first tag image of positioning tag(s) from the first image data; calculating a three-dimensional camera coordinate of feature points of the positioning tag(s) in a first camera coordinate system of the first camera based on the first tag image; calculating a three-dimensional world coordinate of the feature points of the positioning tag(s) in a world coordinate system based on a first camera pose of the first camera when obtaining the first image data in the world coordinate system and the three-dimensional camera coordinate; and generating a map file based on the three-dimensional world coordinate of the feature points of the positioning tag(s).
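The camera-to-world conversion of the tag feature points is the standard rigid transform p_world = R · p_cam + t, with (R, t) the camera pose at capture time. A minimal sketch (function names and the map-file dictionary layout are illustrative, not the patent's format):

```python
def cam_to_world(point_cam, rotation, translation):
    """Transform a 3-D point from the camera frame to the world frame:
    p_world = R @ p_cam + t, where (R, t) is the first camera's pose in
    the world coordinate system when the image was captured."""
    return tuple(
        sum(rotation[i][j] * point_cam[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

def generate_map(tag_points_cam, pose):
    """Build a map file as {tag id: [world-frame feature points]}."""
    rotation, translation = pose
    return {tag: [cam_to_world(p, rotation, translation) for p in pts]
            for tag, pts in tag_points_cam.items()}

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
tag_map = generate_map({"tag7": [(0, 0, 1)]}, (IDENTITY, (1, 2, 3)))
```

Because the tag corners are stored in world coordinates, a later relocalization can match observed tag corners against the map regardless of where the camera was when the map was built.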

Mobile robot system and method for generating map data using straight lines extracted from visual images

A mobile robot is configured to navigate on a sidewalk and deliver a delivery to a predetermined location. The robot has a body and an enclosed space within the body for storing the delivery during transit. At least two cameras are mounted on the robot body and are adapted to take visual images of an operating area. A processing component is adapted to extract straight lines from the visual images taken by the cameras and generate map data based at least partially on the images. A communication component is adapted to send and receive image and/or map data. A mapping system includes at least two such mobile robots, with the communication component of each robot adapted to send and receive image data and/or map data to the other robot. A method involves operating such a mobile robot in an area of interest in which deliveries are to be made.
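When two robots exchange line-based map data, the receiver must avoid duplicating segments both robots observed. A crude endpoint-proximity merge can sketch that step (the segment tuple format, the tolerance, and the duplicate test are all assumptions; a real system would compare lines in a common frame after pose alignment):

```python
def merge_segments(segments_a, segments_b, tol=0.5):
    """Combine straight-line segments (x1, y1, x2, y2) reported by two
    robots, dropping a segment from B when A already holds one whose
    four coordinates all lie within `tol` of it."""
    def close(s, t):
        return all(abs(a - b) <= tol for a, b in zip(s, t))
    merged = list(segments_a)
    for s in segments_b:
        if not any(close(s, m) for m in merged):
            merged.append(s)
    return merged
```

Line segments are a compact map primitive for sidewalk environments, where building edges and curbs dominate the scene.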

Method for localizing a vehicle

A method for localizing a vehicle comprises transmitting first position data related to a first position of the vehicle at a first point in time from the vehicle to a server. The server computes second position data related to the first position of the vehicle at the first point in time based on the received first position data. The server transmits the second position data from the server to the vehicle. The vehicle computes third position data related to a second position of the vehicle at a second point in time based on the received second position data. The second point in time is later than the first point in time.

OWN-POSITION ESTIMATING DEVICE, MOVING BODY, OWN-POSITION ESTIMATING METHOD, AND OWN-POSITION ESTIMATING PROGRAM

An own-position estimating device estimates the own-position of a moving body by matching a feature extracted from an acquired image against a database in which position information and features are associated in advance. The device includes an extracting unit that extracts the feature from the image, an estimating unit that estimates the own-position of the moving body by matching the extracted feature against the database, and a determination threshold value adjusting unit that adjusts a determination threshold value for extracting the feature. The adjusting unit acquires the database in a state in which the determination threshold value has been adjusted, and adjusts the threshold value on the basis of the determination threshold value linked to each of the position information items in the database; the extracting unit then extracts the feature from the image using the adjusted determination threshold value.
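A minimal sketch of looking up a per-location extraction threshold: average the stored thresholds of database entries near the current position estimate. The dictionary layout, the radius, and the averaging policy are assumptions for illustration, not the patent's adjustment rule.

```python
def adjusted_threshold(database, position, radius=5.0):
    """Average the thresholds stored with database entries whose 2-D
    position lies within `radius` of the current estimate; fall back
    to the global mean when no entry is nearby."""
    nearby = [thr for (x, y), thr in database.items()
              if ((x - position[0]) ** 2
                  + (y - position[1]) ** 2) ** 0.5 <= radius]
    pool = nearby or list(database.values())
    return sum(pool) / len(pool)
```

Tying the threshold to position lets the extractor stay sensitive in texture-poor areas while rejecting clutter where the mapping run found features easy to detect.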

SITUATIONAL AWARENESS ROBOT

A system and methods for assessing an environment are disclosed. A method includes causing a robot to transmit data to first and second user devices, causing the robot to execute a first action, and, responsive to a second instruction, causing the robot to execute a second action. At least one user device is outside the environment of the robot. At least one action includes recording a video of at least a portion of the environment, displaying the video in real time on both user devices, and storing the video on a cloud-based network. The other action includes determining a first physical location of the robot, determining a desired second physical location of the robot, and propelling the robot from the first location to the second location. Determining the desired second location is responsive to detecting a touch on a touchscreen video feed displaying the video in real time.
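The touch-to-goal step (converting a tap on the live video feed into a desired second physical location) can be sketched with a simple top-down mapping. This is an illustrative assumption; a real system would invert the camera projection rather than assume the feed covers a flat rectangular area.

```python
def touch_to_goal(touch_px, screen_size, view_origin, view_size):
    """Map a touchscreen pixel on the video feed to a world position,
    assuming the feed shows a flat rectangular region of the floor:
    normalise the pixel to [0, 1] and scale into the viewed area."""
    u = touch_px[0] / screen_size[0]
    v = touch_px[1] / screen_size[1]
    return (view_origin[0] + u * view_size[0],
            view_origin[1] + v * view_size[1])

# Tap at the centre of an 800x600 feed showing a 10 m x 8 m area:
goal = touch_to_goal((400, 300), (800, 600), (0.0, 0.0), (10.0, 8.0))
```

The robot's navigation stack would then treat the returned point as the desired second physical location and propel itself there.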

INFORMATION PROCESSING APPARATUS, MOVING BODY, METHOD FOR CONTROLLING INFORMATION PROCESSING APPARATUS, AND RECORDING MEDIUM
20230039203 · 2023-02-09

An information processing apparatus includes: a shape information acquiring unit configured to acquire shape information of the surrounding environment of a moving body, measured by a sensor mounted in the moving body; a position and posture acquiring unit configured to acquire position and posture information of the sensor; a correction state acquiring unit configured to acquire a performance state relating to a process of correcting the position and posture information; a priority level determining unit configured to determine a priority level of each area for which a map is to be generated; and a map generating unit configured to generate the map on the basis of the shape information and the position and posture information acquired at the time of acquisition of the shape information, the map generating unit generating the map in order from the area of highest priority in accordance with the performance state.
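The priority-ordered generation step can be sketched as a sort gated by the correction performance state. The deferral policy for low-priority areas is an invented example of how the performance state might influence the order; the patent leaves the exact rule open.

```python
def map_generation_order(areas, correction_settled):
    """Order map areas by descending priority level. While the position/
    posture correction is still unsettled (correction_settled is False),
    defer zero-priority areas entirely - a hypothetical policy standing
    in for 'in accordance with the performance state'."""
    ranked = sorted(areas, key=lambda a: a["priority"], reverse=True)
    if correction_settled:
        return [a["name"] for a in ranked]
    return [a["name"] for a in ranked if a["priority"] > 0]
```

Generating high-priority areas first means the most useful parts of the map exist even if pose correction later forces lower-priority regions to be redone.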

Adaptive Perimeter Intrusion Detection for Mobile Automation Apparatus
20230043172 · 2023-02-09

A method includes: selecting first control parameters for a perimeter intrusion detector of a mobile automation apparatus; controlling the perimeter intrusion detector according to the first control parameters, to monitor a first perimeter surrounding the mobile automation apparatus; determining that navigational data of the mobile automation apparatus defines a maneuver satisfying perimeter modification criteria; in response to determining that a likelihood of intrusion of the first perimeter associated with the maneuver exceeds a threshold, selecting second control parameters for the perimeter intrusion detector; modifying the first perimeter to a second perimeter according to the second control parameters; and controlling the perimeter intrusion detector to monitor the second perimeter.
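The parameter-switching logic can be sketched as follows. The sharp-turn test, the 0.7 likelihood threshold, and the halved radius are stand-ins for the patent's unspecified "perimeter modification criteria" and second control parameters.

```python
def select_perimeter(base_radius_m, maneuver, intrusion_likelihood,
                     likelihood_threshold=0.7):
    """Return the monitoring radius for the perimeter intrusion
    detector. When the manoeuvre meets the modification criteria
    (here: a turn rate of at least 30 deg/s, an illustrative stand-in)
    and the likelihood of intruding the first perimeter exceeds the
    threshold, switch to a tighter second perimeter."""
    sharp_turn = abs(maneuver.get("turn_rate_dps", 0.0)) >= 30.0
    if sharp_turn and intrusion_likelihood > likelihood_threshold:
        return base_radius_m * 0.5  # second control parameters
    return base_radius_m            # first control parameters
```

Shrinking the monitored perimeter during a tight turn avoids false intrusion stops triggered by the apparatus's own swept path, while the detector keeps monitoring the smaller zone.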

Automatic wall climbing type radar photoelectric robot system for non-destructive inspection and diagnosis of damages of bridge and tunnel structure

An automatic wall-climbing radar photoelectric robot system for non-destructive inspection and diagnosis of damage to bridge and tunnel structures, mainly comprising a control terminal, a wall-climbing robot, and a server. The wall-climbing robot generates reverse thrust with its rotor systems and, by adopting omnidirectional wheel technology, moves flexibly across the rough surface of a bridge or tunnel structure; during inspection, bridges and tunnels do not need to be closed and traffic is not affected. By arranging a plurality of UWB base stations together with charging and data-receiving devices on the structure, and by means of UWB localization, laser SLAM, and IMU navigation technologies, bridges and tunnels can be divided into different working regions, a plurality of wall-climbing robots can work at the same time with automatic path planning and automatic obstacle avoidance, and unattended, regular automatic patrolling can be realized.
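The UWB localization the system relies on can be illustrated with classic 2-D trilateration from three base-station ranges (a textbook sketch under a noise-free assumption, not this system's implementation; real deployments use more anchors and least-squares or filtering):

```python
def uwb_trilaterate(anchors, ranges):
    """Solve for a 2-D position from ranges to three UWB base stations
    by linearising the circle equations against the first anchor and
    solving the resulting 2x2 linear system (exact for exact ranges)."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    r0, r1, r2 = ranges
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1  # zero only if the anchors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

On a structure divided into working regions, each region's base stations bound where its robots can range, which is how the fixed anchors double as region boundaries for the fleet.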