Patent classifications
G05D1/2462
Robot positioning method and apparatus, intelligent robot, and storage medium
Provided are a robot positioning method and apparatus, an intelligent robot, and a storage medium. The method includes: configuring a camera and various sensors on a robot so that the robot may acquire an image collected by the camera and various sensing data collected by the various sensors (step 101); next, extracting semantic information contained in the collected image (step 102) and identifying, according to the semantic information, the scenario where the robot is currently located (step 103); and finally, determining a current position of the robot according to target sensing data corresponding to the scenario where the robot is located (step 104). In this method, the sensing data used to determine the pose of the robot is not all of the sensing data but only the target sensing data corresponding to the scenario. The basis for determining the pose is therefore more targeted, which further improves the accuracy of the pose.
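A minimal sketch of the scenario-based selection described in steps 103 and 104. The scenario names, the label-to-scenario mapping, and the sensor table are illustrative assumptions, not the patented method:

```python
# Hypothetical mapping from a recognized scenario to the sensors whose
# data is treated as the "target sensing data" for pose estimation.
SCENARIO_SENSORS = {
    "long_corridor": ["wheel_odometry", "imu"],
    "open_hall": ["lidar", "wheel_odometry"],
    "glass_wall": ["imu", "wheel_odometry"],   # lidar unreliable near glass
}

def identify_scenario(semantic_labels):
    """Map semantic labels extracted from the image to a scenario (step 103)."""
    if "glass" in semantic_labels:
        return "glass_wall"
    if "corridor" in semantic_labels:
        return "long_corridor"
    return "open_hall"

def select_target_sensing_data(all_sensing_data, scenario):
    """Keep only the sensing data matching the scenario (input to step 104)."""
    wanted = SCENARIO_SENSORS[scenario]
    return {name: data for name, data in all_sensing_data.items() if name in wanted}
```

The pose estimator would then consume only the filtered dictionary rather than every sensor stream.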
SYSTEM AND METHOD FOR DIRECTING ROBOT PICKING ACTIVITY IN A WAREHOUSE ENVIRONMENT
A system and method are described that provide for directing robot picking activity in a warehouse environment. In one example of the system/method of the present invention, multiple robots are directed by one or more central processors to move to resource locations based on resource retrieval instructions. Once a robot is at or near a resource location (e.g., by a storage rack with an item on it), the resource may be obtained by the robot and/or placed (e.g., by a picker) on a platform linked to and controlled by the robot. The robot may then be directed to transport the resource to an outbound location (e.g., a loading dock). An assignment algorithm may be applied by the one or more processors to regulate movement of a robot according to a calculated arrival time of the robot at a second location.
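A tiny sketch of an arrival-time-based assignment. The straight-line travel model, speed parameter, and function names are assumptions for illustration, not the patented algorithm:

```python
import math

def arrival_time(robot_pos, target_pos, speed=1.0):
    """Calculated arrival time under a straight-line, constant-speed model."""
    dx = target_pos[0] - robot_pos[0]
    dy = target_pos[1] - robot_pos[1]
    return math.hypot(dx, dy) / speed

def assign_robot(robot_positions, target_pos):
    """Assign the retrieval task to the robot with the earliest arrival time."""
    return min(robot_positions,
               key=lambda rid: arrival_time(robot_positions[rid], target_pos))
```

A real dispatcher would fold in congestion, battery state, and queued tasks; the point here is only that assignment keys on a computed arrival time.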
SYSTEMS AND METHODS FOR DRONE NAVIGATION
Systems and methods are described herein that facilitate the navigation of drones, including autonomous and semi-autonomous drones. These systems and methods are particularly applicable to the facilitation of drones in underserved environments. For example, the systems and methods can facilitate the navigation of drones using a spatial obstruction database. In these embodiments, the spatial obstruction database can abstract obstructions in a variety of ways, including as defined subregions. As another example, the systems and methods can facilitate the navigation of drones using techniques for determining navigation paths. As a further example, the systems and methods can facilitate the navigation of drones using techniques for evaluating navigation paths to determine whether line-of-sight path segments are open for navigation. As will be described below, these systems and methods are particularly applicable to the navigation of autonomous and semi-autonomous drones that may have limited processing and memory capabilities.
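A minimal sketch of the two ideas above: a spatial obstruction database that abstracts obstructions as grid subregions, and a line-of-sight check that samples a path segment against it. The unit-cell grid and sampling approach are illustrative assumptions:

```python
def build_obstruction_db(blocked_cells):
    """Represent the spatial obstruction database as a set of blocked
    (row, col) subregions, each covering a 1x1 cell."""
    return set(blocked_cells)

def segment_is_open(db, start, end, samples=100):
    """Sample points along the segment start->end; the line-of-sight path
    segment is open if no sample falls inside a blocked subregion."""
    for i in range(samples + 1):
        t = i / samples
        x = start[0] + t * (end[0] - start[0])
        y = start[1] + t * (end[1] - start[1])
        if (int(x), int(y)) in db:
            return False
    return True
```

The cheap set lookup per sample is the kind of operation that stays tractable on drones with limited processing and memory.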
METHOD AND DEVICE FOR DETERMINING POSITIONS OF NAVIGATION TARGET POINTS AND COMPUTER-READABLE STORAGE MEDIUM
A method for determining positions of navigation target points includes: acquiring an initial map, wherein the initial map represents a plan view corresponding to an explored area; performing contour extraction processing on the explored area to obtain a first contour map corresponding to the explored area, wherein the first contour map includes one or more connecting sections, and the one or more connecting sections are between adjacent ones of a number of sub-areas in the explored area, respectively; performing dilation processing on the first contour map to obtain a second contour map to remove the connecting sections; based on the initial map and the second contour map, extracting at least one navigation target point area from the explored area; and determining the positions of the navigation target points based on connected components corresponding to the at least one navigation target point area.
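The dilation step above can be illustrated in miniature: thickening the walls of a binary contour map with a 3x3 structuring element seals one-cell-wide connecting sections between sub-areas. This pure-Python stand-in is an assumption-laden sketch, not the patented pipeline:

```python
def dilate(grid):
    """3x3 binary dilation: a cell becomes 1 if it or any 8-neighbour is 1."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc]:
                        out[r][c] = 1
    return out
```

After dilation, a narrow gap in a wall is closed, so the connected components on either side separate cleanly into candidate navigation target point areas.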
USING SIMULATED ENVIRONMENTS TO IMPROVE AUTONOMOUS ROBOT OPERATION IN REAL ENVIRONMENTS
Disclosed are apparatuses, systems, and techniques that use simulated environments to improve the operation of autonomous robots in real environments. A method can include generating, for a real environment including a real robot having one or more real sensors, a simulated environment modeling the real environment, the simulated environment including a simulated robot corresponding to the real robot, the simulated robot including one or more simulated sensors corresponding to the one or more real sensors; obtaining simulated data based at least on simulated sensor data collected using the one or more simulated sensors; and using the simulated data to control operation of the real robot within the real environment.
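One deliberately tiny instance of "obtain simulated data, use it to control the real robot" is calibrating a sensor-error model in simulation and applying the correction to real readings. Everything here, the 1-D sensor model included, is an illustrative assumption:

```python
def estimate_bias(sim_readings, true_values):
    """Estimate a constant sensor bias from simulated sensor data,
    where ground truth is known exactly in the simulated environment."""
    errors = [r - t for r, t in zip(sim_readings, true_values)]
    return sum(errors) / len(errors)

def corrected(real_reading, bias):
    """Apply the simulation-derived correction to a real sensor reading."""
    return real_reading - bias
```

The simulated environment supplies what the real one cannot: exact ground truth to calibrate against before the result is transferred to the real robot.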
Ship docking assistance device
A ship docking assistance device includes a position azimuth information acquisition unit; a LIDAR; a map generation updating unit; a high-point acquisition unit; and a position azimuth estimation unit. The LIDAR acquires point-group data three-dimensionally indicating the environment around a ship. The map generation updating unit generates a map around the ship based on the point-group data. The high-point acquisition unit acquires, from within the point-group data, a high point having a prescribed height or more. The position azimuth estimation unit estimates the position and the azimuth of the ship through matching between the position of the high point acquired by the high-point acquisition unit and the position of the high point in the map. The map generation updating unit updates the map by placing the point-group data in the map using, as references, the position and the azimuth of the ship estimated by the position azimuth estimation unit.
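A simplified sketch of the high-point idea: filter the point-group data by a prescribed height, then estimate the ship's translation as the offset aligning observed high points with the map's high points (a centroid match here; the actual device may use a richer matcher and also estimate azimuth):

```python
def high_points(point_cloud, min_height):
    """High-point acquisition: keep 3-D points at or above min_height."""
    return [(x, y, z) for (x, y, z) in point_cloud if z >= min_height]

def estimate_offset(observed, mapped):
    """Translation (dx, dy) aligning observed high points to the map,
    estimated from the difference of 2-D centroids."""
    ox = sum(p[0] for p in observed) / len(observed)
    oy = sum(p[1] for p in observed) / len(observed)
    mx = sum(p[0] for p in mapped) / len(mapped)
    my = sum(p[1] for p in mapped) / len(mapped)
    return mx - ox, my - oy
```

Restricting the match to tall structures (cranes, masts, buildings) discards water-level clutter, which is the motivation for the height threshold.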
Multi-resolution top-down segmentation
Techniques for segmenting sensor data are discussed herein. Data can be represented in individual levels in a multi-resolution voxel space. A first level can correspond to a first region of an environment and a second level can correspond to a second region of an environment that is a subset of the first region. In some examples, the levels can comprise a same number of voxels, such that the first level covers a large, low-resolution region, while the second level covers a smaller, higher-resolution region, though more levels are contemplated. Operations may include analyzing sensor data represented in the voxel space from a perspective, such as a top-down perspective. From this perspective, techniques may generate masks that represent objects in the voxel space. Additionally, techniques may generate segmentation data to verify and/or generate the masks, or otherwise cluster the sensor data.
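The level structure can be sketched with a single indexing function: two levels share the same voxel count, but the first covers a large region at low resolution while the second covers a small sub-region at higher resolution. Parameters and names are illustrative:

```python
def voxel_index(point, origin, region_size, cells_per_axis):
    """Map a 2-D point to its (i, j) voxel in a square level of the space."""
    cell = region_size / cells_per_axis
    i = int((point[0] - origin[0]) / cell)
    j = int((point[1] - origin[1]) / cell)
    return i, j
```

The same point resolves to a coarse cell in the wide level and a finer cell in the nested level, which is what lets top-down masks be refined near the region of interest without growing the voxel count.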
System and method for mapping features of a warehouse environment having improved workflow
A system and method are described that provide for mapping features of a warehouse environment with improved workflow. In one example of the system/method of the present invention, a mapping robot is navigated through a warehouse environment, and sensors of the mapping robot collect geospatial data as part of a mapping mode. A Frontend block of a map framework may be responsible for reading and processing the geospatial data from the sensors of the mapping robot, as well as various other functions. The data may be stored in a keyframe object at a keyframe database. A Backend block of the map framework may be used for detecting loop constraints, building submaps, optimizing a pose graph using keyframe data from one or more trajectory blocks, and/or various other functions.
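A minimal sketch of the keyframe-database idea: the frontend stores keyframes, and the backend flags a loop constraint when a new keyframe's pose comes close to a much earlier one. The radius and gap thresholds, 2-D poses, and class names are assumptions:

```python
import math

class KeyframeDatabase:
    def __init__(self):
        self.keyframes = []  # (x, y) poses in insertion order

    def add(self, pose):
        """Frontend: store a keyframe and return its id."""
        self.keyframes.append(pose)
        return len(self.keyframes) - 1

    def loop_constraints(self, radius=1.0, min_gap=3):
        """Backend: pairs (i, j) of keyframes that are spatially close
        (within radius) but temporally far apart (at least min_gap)."""
        pairs = []
        for j, pj in enumerate(self.keyframes):
            for i in range(j - min_gap):
                pi = self.keyframes[i]
                if math.hypot(pj[0] - pi[0], pj[1] - pi[1]) < radius:
                    pairs.append((i, j))
        return pairs
```

Detected pairs become extra edges in the pose graph, which the optimizer then uses to pull drifted trajectory segments back into alignment.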
USING A QUAD-TREE SPATIAL INDEX TO IDENTIFY MAP DATA FOR AUTONOMOUS SYSTEMS AND APPLICATIONS
In various examples, embodiments are directed to identifying map data (e.g., relevant to a route) using a quad-tree spatial index. In this regard, spatial map data that indicates various map features is represented in a quad-tree spatial index for use in identifying map data. To identify map data, bounding shapes may be generated in association with various segments of a route. An indication of an object-oriented bounding shape may be used to query the quad-tree spatial index to identify map data related to the object-oriented bounding shape. In embodiments, an object-oriented spatial index may be generated that indexes the object-oriented bounding shapes associated with the route. The object-oriented spatial index may be used to query the quad-tree spatial index to identify map data related to the corresponding object-oriented bounding shapes. Alternatively, the quad-tree spatial index may be used to query the object-oriented spatial index to identify map data.
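A compact point-quadtree sketch of the indexing-and-query pattern, with an axis-aligned box standing in for the object-oriented bounding shape. Capacities, the split rule, and feature names are illustrative assumptions:

```python
class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=2):
        self.box = (x0, y0, x1, y1)
        self.capacity = capacity
        self.items = []        # (x, y, feature)
        self.children = None

    def insert(self, x, y, feature):
        if self.children is not None:
            self._child_for(x, y).insert(x, y, feature)
            return
        self.items.append((x, y, feature))
        if len(self.items) > self.capacity:
            self._split()

    def _split(self):
        x0, y0, x1, y1 = self.box
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my, self.capacity),
                         QuadTree(mx, y0, x1, my, self.capacity),
                         QuadTree(x0, my, mx, y1, self.capacity),
                         QuadTree(mx, my, x1, y1, self.capacity)]
        for x, y, f in self.items:
            self._child_for(x, y).insert(x, y, f)
        self.items = []

    def _child_for(self, x, y):
        x0, y0, x1, y1 = self.box
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        return self.children[(1 if x >= mx else 0) + (2 if y >= my else 0)]

    def query(self, qx0, qy0, qx1, qy1):
        """Map features whose point lies inside the query bounding shape;
        subtrees whose quads miss the shape are pruned."""
        x0, y0, x1, y1 = self.box
        if qx1 < x0 or qx0 > x1 or qy1 < y0 or qy0 > y1:
            return []
        found = [f for (x, y, f) in self.items
                 if qx0 <= x <= qx1 and qy0 <= y <= qy1]
        if self.children:
            for child in self.children:
                found.extend(child.query(qx0, qy0, qx1, qy1))
        return found
```

Querying with a bounding shape generated around a route segment touches only the quads that shape overlaps, which is what makes the lookup cheap relative to scanning all map features.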
PATH PERCEPTION USING TEMPORAL MODELING FOR AUTONOMOUS SYSTEMS AND APPLICATIONS
In various examples, to improve path perception in machine learning implementations, a temporal model includes a backbone model trained to predict one or more path perception outputs, such as path geometry, path class, path uncertainty, and/or other path attributes, for a current input frame. To create temporal context, the temporal model enables the backbone model to operate separately (in parallel or otherwise) on a set of frames that are temporally related to the current input frame. The outputs of these separate executions of the backbone model are then concatenated and processed via one or more convolution operations to generate a set of features that is fed to the final output layer of the pipeline, which produces the one or more path perception outputs based on the temporal context.
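The run-per-frame, concatenate, then convolve pattern can be sketched with toy stand-ins. The backbone, feature sizes, and fusion kernel are all assumptions; the structure (separate executions, concatenation, a convolution over the concatenated features) is the point:

```python
def backbone(frame):
    """Stand-in backbone: a fixed-length feature vector per frame."""
    return [sum(frame), max(frame)]

def temporal_features(frames):
    """Run the backbone separately on each temporally related frame,
    then concatenate the per-frame outputs."""
    features = []
    for frame in frames:
        features.extend(backbone(frame))
    return features

def fuse(features, kernel=(0.25, 0.5, 0.25)):
    """1-D convolution (valid padding) standing in for the fusion convs
    that produce the temporally contextualized feature set."""
    k = len(kernel)
    return [sum(features[i + j] * kernel[j] for j in range(k))
            for i in range(len(features) - k + 1)]
```

In a real pipeline the backbone would be a shared-weight network and the fusion a learned convolution layer, but the dataflow mirrors the description above.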