G05D1/0251

Method and apparatus for de-biasing the detection and labeling of objects of interest in an environment
11579625 · 2023-02-14 ·

Described herein are methods of generating learning data to facilitate de-biasing the labeled location of an object of interest within an image. Methods may include: receiving sensor data, where the sensor data is a first image; determining reference corner locations of an object in the first image using image processing; generating observed corner locations of the object in the first image from the determined reference corner locations; generating a bias transformation based, at least in part, on a difference between the reference corner locations and the observed corner locations of the object in the first image; receiving sensor data of a second image from another image sensor; receiving observed corner locations of an object in the second image from a user; and applying the bias transformation to the observed corner locations of the object in the second image to generate de-biased corners for the object in the second image.
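The core of the abstract above can be sketched as follows. This is a minimal illustration assuming the bias transformation reduces to a mean translational offset between reference and observed corners; the patent's actual transformation may be more general (e.g. affine), and all names and values here are hypothetical.

```python
import numpy as np

def estimate_bias(reference_corners, observed_corners):
    """Estimate a translational labeling bias as the mean (dx, dy)
    offset of observed corner labels from reference corners."""
    ref = np.asarray(reference_corners, dtype=float)
    obs = np.asarray(observed_corners, dtype=float)
    return (obs - ref).mean(axis=0)

def debias_corners(observed_corners, bias):
    """Remove the learned bias from new observed corner labels."""
    return np.asarray(observed_corners, dtype=float) - bias

# First image: reference corners vs. systematically offset observed labels
reference = [(10, 10), (50, 10), (50, 40), (10, 40)]
observed = [(12, 13), (52, 13), (52, 43), (12, 43)]
bias = estimate_bias(reference, observed)  # mean offset (2, 3)

# Second image: de-bias user-provided corner labels
user_corners = [(102, 203), (142, 203)]
debiased = debias_corners(user_corners, bias)  # corners at (100, 200), (140, 200)
print(debiased)
```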

Localization method and system for mobile remote inspection and/or manipulation tools in confined spaces

A localization method and system for mobile remote inspection and/or manipulation tools in confined spaces are provided. The system comprises a mobile remote inspection and/or manipulation device including: a carrier movable within the confined space; an inspection and/or manipulation tool, such as an inspection camera; pose sensors arranged on the movable carrier to provide signals indicative of the position and orientation of the carrier; and distance sensors arranged on the movable carrier to provide signals indicative of the distance to interior surfaces of the confined space. The localization method uses probabilistic sensor fusion of the measurement data provided by the pose sensors and the distance sensors in order to precisely determine the actual pose of the movable carrier and to localize the data generated by the inspection and/or manipulation tool.
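One simple form of the probabilistic sensor fusion described above is a Kalman-style product of two Gaussian estimates of the same pose coordinate, one from the pose sensors and one inferred from a distance sensor. This is an illustrative one-dimensional sketch only; the patent does not specify the fusion algorithm, and all variances below are invented.

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian estimates of the same quantity
    (product of Gaussians, i.e. a Kalman measurement update).
    The result is inverse-variance weighted."""
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    mu = w1 * mu1 + w2 * mu2
    var = (var1 * var2) / (var1 + var2)
    return mu, var

# Pose-sensor estimate of carrier position along one axis (metres);
# odometry drifts, so its variance is larger
pose_mu, pose_var = 2.00, 0.04
# Position inferred from a distance sensor ranging a known interior surface
dist_mu, dist_var = 1.90, 0.01

mu, var = fuse_gaussian(pose_mu, pose_var, dist_mu, dist_var)
print(mu, var)  # fused estimate 1.92 m with reduced variance 0.008
```

Note that the fused variance (0.008) is smaller than either input variance, which is why fusing the two sensor modalities localizes the carrier more precisely than either alone.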

Autonomous mobile apparatus and control method thereof

The present disclosure provides an autonomous mobile apparatus and a control method thereof. The method includes: starting a SLAM mode; obtaining first image data captured by a first camera; extracting a first tag image of positioning tag(s) from the first image data; calculating a three-dimensional camera coordinate of feature points of the positioning tag(s) in a first camera coordinate system of the first camera based on the first tag image; calculating a three-dimensional world coordinate of the feature points of the positioning tag(s) in a world coordinate system based on a first camera pose of the first camera when obtaining the first image data in the world coordinate system and the three-dimensional camera coordinate; and generating a map file based on the three-dimensional world coordinate of the feature points of the positioning tag(s).
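The step of converting tag feature points from camera coordinates to world coordinates, given the camera pose, is the standard rigid-body transform p_world = R·p_cam + t. A minimal sketch under that assumption (the rotation, translation, and tag points below are illustrative, not from the disclosure):

```python
import numpy as np

def camera_to_world(points_cam, R_wc, t_wc):
    """Transform 3D feature points from the camera frame to the world
    frame given the camera pose (rotation R_wc, translation t_wc):
    p_world = R_wc @ p_cam + t_wc."""
    pts = np.asarray(points_cam, dtype=float)
    return pts @ np.asarray(R_wc, dtype=float).T + np.asarray(t_wc, dtype=float)

# Camera rotated 90 degrees about the z-axis, located at (1, 2, 0.5) in world
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 2.0, 0.5])

# Two tag feature points, 1 m in front of the camera
tag_corners_cam = np.array([[0.1, 0.0, 1.0],
                            [-0.1, 0.0, 1.0]])
world_pts = camera_to_world(tag_corners_cam, R, t)
print(world_pts)  # (1.0, 2.1, 1.5) and (1.0, 1.9, 1.5)
```

Points expressed in world coordinates this way can then be written into the map file independently of where the camera was when it observed them.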

Mobile robot system and method for generating map data using straight lines extracted from visual images

A mobile robot is configured to navigate on a sidewalk and deliver a delivery to a predetermined location. The robot has a body and an enclosed space within the body for storing the delivery during transit. At least two cameras are mounted on the robot body and are adapted to take visual images of an operating area. A processing component is adapted to extract straight lines from the visual images taken by the cameras and generate map data based at least partially on the images. A communication component is adapted to send and receive image and/or map data. A mapping system includes at least two such mobile robots, with the communication component of each robot adapted to send and receive image data and/or map data to the other robot. A method involves operating such a mobile robot in an area of interest in which deliveries are to be made.

Terrain trafficability assessment for autonomous or semi-autonomous rover or vehicle

A rover or semi-autonomous or autonomous vehicle may use an image classifier to determine a terrain class for regions of an image of the terrain ahead of the rover or vehicle. The same image regions are used to estimate the slope of the terrain in each region. The terrain class and slope are used to predict the amount of slip the rover will experience when traversing the terrain of each region. A heuristic mapping for the terrain class may be applied to the predicted slip amount to determine a hazard level for the rover or vehicle traversing that terrain.
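The class-and-slope-to-hazard pipeline above can be sketched as two lookups: a per-class slip model driven by slope, followed by a per-class heuristic mapping from predicted slip to a hazard level. All coefficients and thresholds below are invented for illustration; the patent does not disclose specific values.

```python
def predict_slip(terrain_class, slope_deg):
    """Hypothetical per-class slip model: slip percentage grows with
    slope at a class-dependent rate, capped at 100%."""
    slope_gain = {"bedrock": 0.3, "sand": 1.5, "gravel": 0.8}
    return min(100.0, slope_gain[terrain_class] * slope_deg)

def hazard_level(terrain_class, slip_pct):
    """Heuristic per-class mapping from predicted slip to hazard:
    the same slip is more hazardous on sand than on bedrock."""
    limits = {"bedrock": (20, 60), "sand": (10, 30), "gravel": (15, 45)}
    low, high = limits[terrain_class]
    if slip_pct < low:
        return "low"
    if slip_pct < high:
        return "medium"
    return "high"

slip = predict_slip("sand", 15.0)          # 22.5% predicted slip
print(slip, hazard_level("sand", slip))    # medium hazard on sand
```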

Method and system for distributed learning and adaptation in autonomous driving vehicles

The present teaching relates to a system, method, and medium for in-situ perception in an autonomous driving vehicle. A plurality of types of sensor data, acquired continuously by a plurality of types of sensors deployed on the vehicle, are first received, where the sensor data provide information about the surroundings of the vehicle. Based on at least one model, one or more items appearing in the surroundings of the vehicle are tracked from a first of the plurality of types of sensor data, acquired by one or more sensors of a first type. At least some of the tracked items are then automatically labeled on-the-fly, via either cross-modality validation or cross-temporal validation, and are used to locally adapt, on-the-fly, the at least one model in the vehicle.

Systems and methods for projecting a three-dimensional (3D) surface to a two-dimensional (2D) surface for use in autonomous driving
20230236603 · 2023-07-27 ·

Systems and methods for projecting a three-dimensional (3D) surface to a two-dimensional (2D) surface for use in autonomous driving are disclosed. In one aspect, a control system for an autonomous vehicle includes a processor and a computer-readable memory in communication with the processor and having stored thereon computer-executable instructions to cause the processor to: receive a 3D map including a plurality of objects, determine a base point in the 3D map, shift the objects in the 3D map based on the base point, project the objects in the shifted 3D map to a 2D map, and output the 2D map.
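The shift-then-project step can be sketched as follows, assuming a flat-ground orthographic projection in which the base point becomes the 2D origin and the vertical axis is dropped. The abstract does not prescribe a particular projection, so this is one plausible reading; coordinates are illustrative.

```python
import numpy as np

def project_3d_map_to_2d(objects_3d, base_point):
    """Shift 3D object positions so that base_point becomes the origin,
    then drop the vertical (z) axis to produce a top-down 2D map."""
    pts = np.asarray(objects_3d, dtype=float)
    shifted = pts - np.asarray(base_point, dtype=float)  # shift by base point
    return shifted[:, :2]                                # keep x, y only

# Two map objects near a base point at (100, 200, 0) in world coordinates
objects = [(105.0, 202.0, 3.0), (110.0, 198.0, 0.5)]
base = (100.0, 200.0, 0.0)
map_2d = project_3d_map_to_2d(objects, base)
print(map_2d)  # (5, 2) and (10, -2) in the 2D map frame
```

Shifting before projecting keeps the 2D coordinates small and centered on the vehicle's region of interest, which avoids precision loss with large global coordinates.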

Apparatus and Method for Controlling Mobile Body
20230004169 · 2023-01-05 ·

An apparatus and method for controlling a mobile body are provided that adjust a detection result from a radar device in accordance with the three-dimensional shape of each region of a three-dimensional map generated from an image captured by an image-capturing device. A mobile body control unit 105 is an apparatus for controlling a vehicle (mobile body) that includes an image-capturing device 101 and a millimeter-wave radar device 102 (radar device). A three-dimensional map generation unit 203 generates a three-dimensional map of the vehicle's surroundings from an image captured by the image-capturing device 101. A radar weight map estimation unit 204 (weight estimation unit) estimates, from the three-dimensional shape of each region of the three-dimensional map, a weight for the detection result of the millimeter-wave radar device 102 in that region. A weight adjustment unit 205 (adjustment unit) adjusts a detection result of the millimeter-wave radar device 102 on the basis of the weight.

Systems and methods for transfer of material using autonomous machines with reinforcement learning and visual servo control

Systems and methods enable an autonomous vehicle to perform an iterative task of transferring material from a source location to a destination location, such as moving dirt from a pile, in a more efficient manner, using a combination of reinforcement learning techniques to select a motion path for a particular iteration and visual servo control to guide the motion of the vehicle along the selected path. Lifting, carrying, and depositing of material by the autonomous vehicle can also be managed using similar techniques.

System and Method for Dimensioning Target Objects
20230025659 · 2023-01-26 ·

A method comprising obtaining, from a sensor, depth data representing a target object; selecting a model to fit to the depth data; for each data point in the depth data: defining a ray from a location of the sensor to the data point; and determining an error based on a distance from the data point to the model along the ray; when the depth data does not meet a similarity threshold for the model based on the determined errors, selecting a new model and repeating the error determination for the depth data based on the new model; when the depth data meets the similarity threshold for the model, selecting the model as representing the target object; and outputting the selected model representing the target object.
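The per-point ray-based error test above can be sketched with a plane standing in for the candidate model (the patent allows arbitrary models; the plane, the tolerance, and the inlier fraction below are illustrative assumptions).

```python
import numpy as np

def ray_errors(points, sensor, plane_n, plane_d):
    """For each depth point, cast a ray from the sensor through the
    point and measure the distance, along that ray, from the point to
    the candidate model (here a plane n . x = d)."""
    errs = []
    for p in np.asarray(points, dtype=float):
        ray = p - sensor
        ray = ray / np.linalg.norm(ray)
        denom = ray @ plane_n
        if abs(denom) < 1e-9:            # ray parallel to plane: no hit
            errs.append(np.inf)
            continue
        t = (plane_d - p @ plane_n) / denom
        errs.append(abs(t))
    return np.array(errs)

def fits_model(errors, tol=0.05, inlier_frac=0.9):
    """Similarity test: accept the model when a sufficient fraction of
    points lie within tolerance along their rays."""
    return np.mean(errors <= tol) >= inlier_frac

sensor = np.zeros(3)
plane_n, plane_d = np.array([0.0, 0.0, 1.0]), 2.0   # candidate model: z = 2
points = np.array([[0.1, 0.2, 2.0],
                   [0.5, -0.3, 2.01],
                   [1.0, 1.0, 2.5]])                 # last point is an outlier
errs = ray_errors(points, sensor, plane_n, plane_d)
print(fits_model(errs))  # model rejected: only 2 of 3 points are inliers
```

When `fits_model` returns False, the method's loop would select a new candidate model (e.g. a box or cylinder) and repeat the error computation until a model meets the similarity threshold.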