Patent classifications
G05D1/0251
OBSTACLE TO PATH ASSIGNMENT FOR AUTONOMOUS SYSTEMS AND APPLICATIONS
In various examples, one or more output channels of a deep neural network (DNN) may be used to determine assignments of obstacles to paths. To increase the accuracy of the DNN, the input to the DNN may include an input image, one or more representations of path locations, and/or one or more representations of obstacle locations. The system may thus repurpose previously computed information, such as obstacle locations and path locations, from other operations of the system and use it to generate more detailed inputs for the DNN, increasing the accuracy of the obstacle-to-path assignments. Once the output channels are computed using the DNN, computed bounding shapes for the objects may be compared to the outputs to determine the path assignment for each object.
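A minimal sketch of the two steps this abstract names: stacking previously computed path and obstacle locations into extra input channels, and assigning each bounding shape to the path whose output channel responds most strongly inside it. All shapes, names, and the scoring rule are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def build_dnn_input(image, path_mask, obstacle_mask):
    """Stack the camera image with rasterized path and obstacle masks so the
    network receives previously computed locations as extra channels."""
    return np.concatenate(
        [image, path_mask[..., None], obstacle_mask[..., None]], axis=-1)

def assign_obstacles_to_paths(output_channels, bounding_boxes):
    """output_channels: (num_paths, H, W) per-path scores from the DNN.
    bounding_boxes: list of (x0, y0, x1, y1) boxes, one per obstacle.
    Each obstacle gets the path whose channel scores highest inside its box."""
    assignments = []
    for (x0, y0, x1, y1) in bounding_boxes:
        region = output_channels[:, y0:y1, x0:x1]            # (num_paths, h, w)
        scores = region.reshape(region.shape[0], -1).mean(axis=1)
        assignments.append(int(scores.argmax()))
    return assignments
```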
PATHFINDING USING CENTERLINE HEURISTICS FOR AN AUTONOMOUS MOBILE ROBOT
To load and unload a trailer, an autonomous mobile robot determines its own location and the locations of objects within the trailer relative to the trailer itself, rather than relative to a warehouse. The autonomous mobile robot navigates within the trailer and manipulates objects within it from the trailer's reference frame. Additionally, the autonomous mobile robot uses a centerline heuristic to compute a path for itself within the trailer. The centerline heuristic evaluates nodes within the trailer based on their distance from the trailer's centerline: nodes farther from the centerline are assigned a higher cost. Thus, when the autonomous mobile robot computes a path, the path is more likely to stay near the centerline of the trailer rather than approach the sides.
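The cost shaping the abstract describes can be sketched in a few lines. The node encoding, step cost, and penalty weight below are illustrative assumptions; a real planner would plug this cost into the node evaluation of A* or Dijkstra.

```python
def centerline_cost(node, centerline_x, step_cost=1.0, penalty=0.5):
    """Nodes farther from the trailer's centerline get a higher cost, so the
    planned path tends to stay near the middle rather than hug the sides."""
    x, _y = node
    return step_cost + penalty * abs(x - centerline_x)

# Example: total cost of a candidate path through grid nodes (x, y).
path = [(2, 0), (2, 1), (3, 2), (3, 3)]
total = sum(centerline_cost(n, centerline_x=2.5) for n in path)
print(f"path cost with centerline bias: {total:.2f}")
```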
Artificial intelligence apparatus for sharing information of a stuck area and method for the same
An AI apparatus and an operating method for the same are provided. The AI apparatus includes a communication interface that receives 3D sensor data and bumper sensor data from a first cleaner; a processor that generates surrounding situation map data based on the 3D sensor data and the bumper sensor data; and a learning processor that generates learning data by labeling the surrounding situation map data with area classification data representing whether the map data corresponds to a stuck area, and trains a stuck area classification model on the learning data. The processor transmits the trained stuck area classification model to a second cleaner through the communication interface.
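The label-then-train flow is concrete enough for a small sketch. The feature extraction, the synthetic patches, and the use of scikit-learn's LogisticRegression as the stuck area classification model are all illustrative assumptions; the patent does not specify the model family.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def label_map_patches(map_patches, stuck_flags):
    """Pair each surrounding-situation map patch with a stuck/not-stuck label
    (in the patent, derived from 3D sensor and bumper sensor events)."""
    X = np.stack([p.ravel() for p in map_patches])
    y = np.asarray(stuck_flags, dtype=int)
    return X, y

# Synthetic stand-in data: 8x8 map patches around the cleaner.
rng = np.random.default_rng(0)
patches = [rng.random((8, 8)) for _ in range(40)]
labels = [int(p.mean() > 0.5) for p in patches]      # toy labeling rule

X, y = label_map_patches(patches, labels)
model = LogisticRegression(max_iter=1000).fit(X, y)  # stuck-area classifier
print("first patch predicted stuck?", bool(model.predict(X[:1])[0]))
```

In the described system, the trained model would then be serialized and sent to the second cleaner over the communication interface.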
Robot, system and method for detecting and/or responding to transitions in height
The present invention relates to a method of a robot (30) responding to a transition (40) between height levels. The method comprises the robot (30) travelling; sensing information about the surroundings of the robot (30); generating, based on that information, a three-dimensional data set corresponding to a heightmap of the surroundings of the robot (30); detecting a transition (40) between different height levels (10, 20) in the three-dimensional data set; categorizing the transition (40) between the different height levels (10, 20) by means of at least one characteristic; and the robot (30) performing a response action that depends on the categorization of the transition (40). The present invention also relates to a corresponding system.
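A minimal sketch of the detect-and-categorize step on a heightmap, assuming a 2D numpy array of heights; the gradient test, thresholds, and the three categories are illustrative assumptions rather than the patent's characteristics.

```python
import numpy as np

def categorize_transitions(heightmap, step_thresh=0.05, cliff_thresh=0.20):
    """Label each cell by the magnitude of its local height change:
    0 = flat, 1 = traversable step (e.g. a curb), 2 = cliff/obstacle."""
    d_row, d_col = np.gradient(heightmap)
    magnitude = np.hypot(d_row, d_col)
    categories = np.zeros(heightmap.shape, dtype=int)
    categories[magnitude >= step_thresh] = 1
    categories[magnitude >= cliff_thresh] = 2
    return categories

def response_action(category):
    """The response action depends on the categorization of the transition."""
    return {0: "continue", 1: "slow_and_climb", 2: "stop_and_replan"}[category]

heightmap = np.zeros((4, 6))
heightmap[:, 3:] = 0.10                      # a 10 cm step across the map
print(response_action(categorize_transitions(heightmap).max()))  # slow_and_climb
```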
Multi-domain neighborhood embedding and weighting of sampled data
This document describes "Multi-domain Neighborhood Embedding and Weighting" (MNEW) for use in processing point cloud data, including sparsely populated data obtained from a lidar, a camera, a radar, or a combination thereof. MNEW is a process based on a dilation architecture that captures pointwise and global features of the point cloud data, with multi-scale local semantics adopted from a hierarchical encoder-decoder structure. Neighborhood information is embedded in both a static geometric domain and a dynamic feature domain. Geometric distance, feature similarity, and local sparsity can be computed and transformed into adaptive weighting factors that are reapplied to the point cloud data. This enables an automotive system to perform well with both sparse and dense point cloud data. Processing point cloud data via the MNEW techniques promotes greater adoption of sensor-based autonomous driving and perception-based systems.
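A minimal sketch of the weighting idea: embed each point's neighborhood in the geometric and feature domains, then turn geometric distance, feature similarity, and local sparsity into per-neighbor weights. The Gaussian kernels, the sparsity proxy, and the combination rule below are assumptions, not MNEW's published definition.

```python
import numpy as np

def neighborhood_weights(points, features, k=4, sigma_d=0.5, sigma_f=0.5):
    """points: (N, 3) sensor coordinates; features: (N, C) per-point features.
    Returns (N, k) adaptive weights over each point's k nearest neighbors."""
    diffs = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)               # static geometric domain
    idx = np.argsort(dist, axis=1)[:, 1:k + 1]          # k nearest, skipping self
    nn_dist = np.take_along_axis(dist, idx, axis=1)     # (N, k)
    feat_dist = np.linalg.norm(
        features[:, None, :] - features[idx], axis=-1)  # dynamic feature domain
    sparsity = nn_dist.mean(axis=1, keepdims=True)      # local sparsity proxy
    w = np.exp(-nn_dist**2 / sigma_d**2) * np.exp(-feat_dist**2 / sigma_f**2)
    return w / sparsity                  # fold local sparsity into the weights

pts = np.random.default_rng(1).random((10, 3))
feats = np.random.default_rng(2).random((10, 8))
print(neighborhood_weights(pts, feats).shape)           # (10, 4)
```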
Method for robot repositioning
A robot repositioning method is provided. A position deviation caused by excessive accumulation of the robot's walking error is corrected, and repositioning is implemented, by using as a reference a path the robot walks along the edge of an isolated object, improving the robot's positioning accuracy and walking efficiency during subsequent navigation.
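The abstract leaves the correction itself unspecified; as one hedged illustration, a stored edge-following path can be compared against the re-walked edge to estimate accumulated drift. The point pairing and mean-offset alignment below are a simplification for illustration, not the patent's method.

```python
import numpy as np

def reposition_offset(reference_edge_path, walked_edge_path):
    """Both inputs: (N, 2) arrays of corresponding points along the isolated
    object's edge. Returns a (dx, dy) correction for accumulated error."""
    ref = np.asarray(reference_edge_path, dtype=float)
    walked = np.asarray(walked_edge_path, dtype=float)
    return (ref - walked).mean(axis=0)

pose = np.array([4.8, 3.1])
offset = reposition_offset([[1, 1], [2, 1], [3, 1]],
                           [[1.2, 0.9], [2.2, 0.9], [3.2, 0.9]])
print("corrected pose:", pose + offset)   # shifts pose by (-0.2, +0.1)
```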
Control device, control method, and mobile body
The present disclosure relates to a control device, a control method, a program, and a mobile body that enable an efficient search for surrounding information when the mobile body's own position is indeterminate. In that state, information about which surfaces of an obstacle have already been sensed is recorded on the basis of the own position, obstacle position information around the mobile body, and the sensing range of a surface sensing unit that includes a stereo camera for determining the own position, and a search route is planned on the basis of the recorded surface-sensed area information. The present technology can be applied to multi-legged robots, flying bodies, and in-vehicle systems that move autonomously under the control of an onboard computer.
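A minimal sketch of routing toward surfaces that have not yet been sensed, assuming the recorded surface-sensed information is kept in a simple grid; the cell encoding and the breadth-first search are illustrative assumptions.

```python
from collections import deque

def route_to_unsensed(grid, start):
    """grid[y][x] is 'free', 'sensed' (obstacle surface already observed), or
    'unsensed'. BFS through free space to the nearest unsensed surface."""
    h, w = len(grid), len(grid[0])
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (x, y), path = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < w and 0 <= ny < h and nxt not in seen:
                if grid[ny][nx] == "unsensed":
                    return path + [nxt]          # search route to a new surface
                if grid[ny][nx] == "free":
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
    return None  # every reachable surface has already been sensed

grid = [["free", "free", "sensed"],
        ["free", "free", "unsensed"]]
print(route_to_unsensed(grid, (0, 0)))   # [(0, 0), (1, 0), (1, 1), (2, 1)]
```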
Moving robot and controlling method for the moving robot
A moving robot includes: a main body; a traveling unit configured to rotate and move the main body; a sensing unit configured to sense position information of a specific point on a front portion of a docking device; and a controller configured to, based on a sensing result of the sensing unit, determine i) whether a first condition, preset to be satisfied when the docking device is disposed in front of the moving robot, is satisfied, and ii) whether a second condition, preset to be satisfied when the moving robot is disposed in front of the docking device, is satisfied; to control an operation of the traveling unit so as to satisfy the first condition and the second condition; and to move forward so as to attempt to dock in a state where the first condition and the second condition are satisfied.
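A minimal sketch of the two docking conditions, assuming the sensing unit can report the bearing to the docking device in the robot's frame and the robot's bearing in the device's frame; the angle representation and tolerances are illustrative assumptions.

```python
import math

def first_condition(bearing_to_dock_rad, tol=math.radians(5)):
    """Satisfied when the docking device lies in front of the moving robot."""
    return abs(bearing_to_dock_rad) < tol

def second_condition(bearing_to_robot_rad, tol=math.radians(5)):
    """Satisfied when the moving robot lies in front of the docking device."""
    return abs(bearing_to_robot_rad) < tol

def ready_to_dock(bearing_to_dock_rad, bearing_to_robot_rad):
    """The forward docking attempt is made only once both conditions hold."""
    return (first_condition(bearing_to_dock_rad)
            and second_condition(bearing_to_robot_rad))

print(ready_to_dock(math.radians(2), math.radians(-3)))   # True: aligned
print(ready_to_dock(math.radians(2), math.radians(40)))   # False: off-axis
```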
Enhanced object detection for autonomous vehicles based on field of view
Systems and methods for enhanced object detection for autonomous vehicles based on field of view. An example method includes obtaining an image from one of one or more image sensors positioned about a vehicle. A field of view for the image is determined, the field of view being associated with a vanishing line. A crop portion corresponding to the field of view is generated from the image, and a remaining portion of the image is downsampled. Information associated with detected objects depicted in the image is output based on a convolutional neural network, with objects detected by performing a forward pass of the crop portion and the remaining portion through the network.
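A minimal sketch of the preprocessing step: keep a full-resolution band around the vanishing line (where distant objects appear small) and a downsampled copy as coarse context, then feed both to the detector. Treating the vanishing line as a row index, and downsampling the whole frame rather than only the remainder, are simplifying assumptions.

```python
import numpy as np

def crop_and_downsample(image, vanishing_row, crop_height=128, stride=2):
    """Return (full-resolution crop around the vanishing line,
    downsampled frame); both would go through the CNN's forward pass."""
    top = max(0, vanishing_row - crop_height // 2)
    crop = image[top:top + crop_height, :, :]        # fine detail near horizon
    coarse = image[::stride, ::stride, :]            # cheap coverage elsewhere
    return crop, coarse

frame = np.zeros((512, 1024, 3), dtype=np.uint8)
crop, coarse = crop_and_downsample(frame, vanishing_row=200)
print(crop.shape, coarse.shape)   # (128, 1024, 3) (256, 512, 3)
```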
OBJECT DETECTION VIA COMPARISON OF SYNCHRONIZED PULSED ILLUMINATION AND CAMERA IMAGING
An image processing system may comprise a global shutter camera, an illumination emitter, and a processing system comprising at least one processor and memory. The processing system may be configured to control the image processing system to: control the illumination emitter to illuminate a scene; control the global shutter camera to capture a sequence of images of the scene, wherein the captured sequence of images includes images that are captured without illumination of the scene by the illumination emitter and images that are captured while the scene is illuminated by the illumination emitter; and determine presence of an object in the scene based on comparison of the images captured without illumination of the scene and images captured with illumination of the scene.
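The comparison step can be sketched as simple frame differencing between synchronized lit and unlit captures; the uint8 frames, thresholds, and area test below are illustrative assumptions, not the system's actual decision rule.

```python
import numpy as np

def object_present(frame_lit, frame_dark, diff_thresh=30, area_frac=0.01):
    """Nearby objects reflect the pulsed illumination, so they appear brighter
    in the lit frame; a large enough brightened area indicates an object."""
    diff = frame_lit.astype(int) - frame_dark.astype(int)
    brightened = diff > diff_thresh
    return brightened.mean() > area_frac

lit = np.full((240, 320), 80, dtype=np.uint8)    # frame with emitter on
dark = np.full((240, 320), 20, dtype=np.uint8)   # frame with emitter off
print(object_present(lit, dark))                 # True: scene brightened
```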