Patent classifications
G05B2219/40442
System and method for determining grasping positions for two-handed grasps of industrial objects
A system and method are provided for determining grasping positions for two-handed grasps of industrial objects. The system may include a processor configured to determine a three-dimensional (3D) voxel grid for a 3D model of a target object. In addition, the processor may be configured to determine at least one pair of spaced-apart grasping positions on the target object at which the target object is capable of being grasped with two hands at the same time, based on processing the 3D voxel grid for the target object with a neural network trained to determine grasping positions for two-handed grasps of target objects using training data. Such training data may include 3D voxel grids of a plurality of 3D models of training objects and grasping data including corresponding pairs of spaced-apart grasping positions for two-handed grasps of the training objects. Also, the processor may be configured to provide output data that specifies the determined grasping positions on the target object for two-handed grasps.
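As an illustrative sketch only (not the patented method), the first step the abstract describes, converting a 3D model into a voxel grid suitable as neural-network input, can be approximated by binning a point cloud into a fixed-size occupancy grid. The grid size and min-max normalization here are assumptions:

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Map an (N, 3) point cloud of a 3D model onto a fixed-size
    binary occupancy grid (a common neural-network input format)."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    span = np.where(hi - lo > 0, hi - lo, 1.0)  # avoid division by zero on flat axes
    # Normalize each point into [0, grid_size - 1] and round down to a cell index.
    idx = ((pts - lo) / span * (grid_size - 1)).astype(int)
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid
```

A trained network would then consume this grid and emit candidate pairs of grasp positions; that learned part is not reproduced here.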
MOTION PLANNING OF A ROBOT STORING A DISCRETIZED ENVIRONMENT ON ONE OR MORE PROCESSORS AND IMPROVED OPERATION OF SAME
A robot control system determines which of a number of discretizations to use to generate discretized representations of robot swept volumes and of the environment in which the robot will operate. Obstacle voxels (or boxes) representing the environment and the obstacles therein are streamed into the processor and stored in on-chip environment memory. At runtime, the robot control system may dynamically switch between multiple motion planning graphs stored in off-chip or on-chip memory. Dynamically switching between multiple motion planning graphs at runtime enables the robot to perform motion planning at a relatively low cost as characteristics of the robot itself change.
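The runtime switching idea can be sketched as selecting among precomputed planning graphs keyed by the robot's current characteristics. The graph contents and the `payload_kg` key below are illustrative assumptions, not the patent's actual data layout:

```python
# Precomputed motion-planning graphs, one per robot configuration regime.
# Node/edge counts are placeholder values for illustration.
PLANNING_GRAPHS = {
    "empty_gripper": {"nodes": 1000, "edges": 8000},
    "holding_part":  {"nodes": 1000, "edges": 5200},  # fewer edges: swept volumes grow
}

def select_graph(robot_state):
    """Pick the precomputed graph matching the robot's current characteristics,
    so planning at runtime is a cheap lookup rather than a rebuild."""
    key = "holding_part" if robot_state.get("payload_kg", 0) > 0 else "empty_gripper"
    return key, PLANNING_GRAPHS[key]
```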
Setup planning and parameter selection for robotic finishing
Methods, systems, and platforms for automatic setup planning for a robot. The method includes sampling multiple poses in multiple dimensions within a robotic workspace. The method includes generating one or more candidate configurations based on the multiple poses. The method includes determining a score for each candidate configuration of the one or more candidate configurations. The score represents area coverage of a region of interest and at least one of an amount of setup time of the candidate configuration or an amount of energy used. The method includes determining a set of candidate configurations that has an overall area coverage that covers the region of interest based on the score for each candidate configuration. The method includes controlling a position and an orientation of the object based on the set of candidate configurations.
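A minimal sketch of the scoring and selection loop described above, assuming a simple score that rewards coverage and penalizes setup time and energy, with greedy selection until the region of interest is covered (the weights and field names are assumptions):

```python
def score(candidate, w_time=0.1, w_energy=0.05):
    """Score = area coverage penalized by setup time and energy use."""
    return (len(candidate["covers"])
            - w_time * candidate["setup_s"]
            - w_energy * candidate["energy_j"])

def plan_setups(candidates, region):
    """Greedily pick configurations, best score first, until their
    combined coverage spans the region of interest."""
    remaining, chosen = set(region), []
    for c in sorted(candidates, key=score, reverse=True):
        if not remaining:
            break
        gain = remaining & set(c["covers"])
        if gain:
            chosen.append(c["name"])
            remaining -= gain
    return chosen, remaining
```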
System and method for trajectory planning for manipulators in robotic finishing applications
Methods, systems, and apparatus for automatically moving a tool attached to a robotic manipulator from a start position to a goal position. The method includes determining, using a processor, a plurality of next possible positions from the start position. The method includes selecting a second position from the plurality of next possible positions based on respective costs associated with moving the tool from the start position to each of the possible positions in the plurality of next possible positions. The method includes moving, using a plurality of actuators, the tool to the second position. The method includes determining an updated plurality of next possible positions, selecting a next position, and moving the tool to the next position until the goal position is reached.
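The stepwise loop the abstract describes (evaluate next possible positions, pick the cheapest, move, repeat until the goal) can be sketched generically; the neighbor and cost functions here stand in for the manipulator-specific details:

```python
def step_toward_goal(start, goal, neighbors, cost, max_steps=1000):
    """Repeatedly select the cheapest next position and move to it
    until the goal position is reached."""
    path, pos = [start], start
    for _ in range(max_steps):
        if pos == goal:
            return path
        # Choose the candidate with the lowest cost of moving there.
        pos = min(neighbors(pos), key=lambda nxt: cost(pos, nxt, goal))
        path.append(pos)
    raise RuntimeError("goal not reached within step budget")
```

For example, on a 1-D line with distance-to-goal as the cost, the tool walks straight toward the goal.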
Determining a Virtual Representation of an Environment By Projecting Texture Patterns
Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. And based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
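The projected random textures serve only to make correspondences findable; once a feature is matched between the two viewpoints, the depth measurement follows from standard stereo triangulation, Z = f * B / d. A minimal sketch (focal length in pixels, baseline in meters, both assumed calibrated):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo triangulation: distance Z = f * B / d, where d is
    the pixel disparity of a matched feature between the two viewpoints."""
    if disparity_px <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / disparity_px
```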
Control apparatus, robot system, and method of detecting object
A control apparatus includes a processor that executes a first point cloud generation process and a second point cloud generation process, and detects the object using the first point cloud or the second point cloud. The first point cloud generation process includes a first imaging process that acquires a first image according to a first depth measuring method and a first analysis process that generates a first point cloud; the second point cloud generation process likewise includes a second imaging process that acquires a second image according to a second depth measuring method and a second analysis process that generates a second point cloud. The first point cloud generation process completes in a shorter time than the second point cloud generation process, and the processor starts the second point cloud generation process after the first imaging process and discontinues it if the first point cloud satisfies a predetermined condition of success.
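The decision logic can be sketched sequentially (the patent actually starts the slower process concurrently after the first imaging step and cancels it; this simplified sketch captures only the fast-first, fall-back-to-slow behavior, and the success condition is an assumption):

```python
def detect_object(fast_pipeline, slow_pipeline, is_success):
    """Run the fast point-cloud pipeline first; only fall back to the
    slower, higher-fidelity pipeline when the fast result fails the
    predetermined success condition."""
    fast_cloud = fast_pipeline()
    if is_success(fast_cloud):
        return fast_cloud, "fast"
    return slow_pipeline(), "slow"
```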
METHOD AND SYSTEM FOR OBSTACLE AVOIDANCE IN ROBOT PATH PLANNING USING DEPTH SENSORS
The present teaching relates to methods, systems, media, and implementations for robot path planning. Depth data of obstacles, acquired by depth sensors deployed in a 3D robot workspace and represented with respect to a sensor coordinate system, is transformed into depth data with respect to a robot coordinate system. The 3D robot workspace is discretized to generate 3D grid points representing a discretized 3D robot workspace. Based on the depth data with respect to the robot coordinate system, binarized values are assigned to at least some of the 3D grid points to generate a binarized representation of the obstacles present in the 3D robot workspace. With respect to one or more sensing points associated with a part of a robot, it is determined whether the part would collide with any obstacle. Based on that determination, a path is planned for the robot to move along while avoiding any obstacle.
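The binarization and collision-query steps can be sketched as follows, assuming the obstacle points have already been transformed into robot coordinates; the workspace bounds and resolution are assumptions:

```python
import numpy as np

def occupancy_grid(obstacle_pts, bounds_lo, bounds_hi, res):
    """Binarize obstacle depth points (in robot coordinates) onto a
    discretized 3D workspace grid: occupied cells become True."""
    lo, hi = np.asarray(bounds_lo, float), np.asarray(bounds_hi, float)
    shape = np.ceil((hi - lo) / res).astype(int)
    grid = np.zeros(shape, dtype=bool)
    idx = ((np.asarray(obstacle_pts, float) - lo) / res).astype(int)
    keep = np.all((idx >= 0) & (idx < shape), axis=1)  # drop out-of-bounds points
    grid[tuple(idx[keep].T)] = True
    return grid

def collides(grid, sensing_pts, bounds_lo, res):
    """True if any sensing point on a robot part lands in an occupied cell."""
    idx = ((np.asarray(sensing_pts, float) - np.asarray(bounds_lo, float)) / res).astype(int)
    idx = np.clip(idx, 0, np.array(grid.shape) - 1)
    return bool(grid[tuple(idx.T)].any())
```

A planner would then query `collides` for each candidate robot pose along a path.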
Fast motion planning collision detection
Techniques described herein include a system and methods for implementing fast collision detection for motion planning. In some embodiments, an area voxel map is generated with respect to a three-dimensional space within which a repositioning event is to occur. A number of movement voxel maps are then identified as being related to potential repositioning options. The area voxel map is then compared to each of the movement voxel maps to identify collisions that may occur with respect to the repositioning options. In some embodiments, each voxel map includes a number of bits that each represent a voxel in a volume of space. The comparison between the area voxel map and each of the movement voxel maps may include a logical conjunction (e.g., an AND operation). Movement voxel maps for which the comparison result includes a value of 1 are then removed from the set of valid repositioning options.
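Representing each voxel map as a bit vector makes the collision test a single AND: any shared set bit means an occupied voxel overlaps the movement's swept volume. A minimal sketch using Python integers as bit vectors (the map encoding is an assumption):

```python
def valid_moves(area_map, movement_maps):
    """Keep movement options whose voxel bitmap shares no set bit with
    the area (obstacle) bitmap; a nonzero AND result means collision,
    so that option is filtered out."""
    return [name for name, m in movement_maps.items() if (area_map & m) == 0]
```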
3D-2D vision system for robotic carton unloading
A robotic carton loader or unloader incorporates three-dimensional (3D) and two-dimensional (2D) sensors to detect, respectively, a 3D point cloud and a 2D image of a carton pile within a transportation carrier such as a truck trailer or shipping container. Edge detection is performed using the 3D point cloud, discarding segments that are too small to be part of a product such as a carton. Segments that are too large to correspond to a carton are 2D image processed to detect additional edges. Results from 3D and 2D edge detection are converted into a calibrated 3D space of the robotic carton loader or unloader to perform loading or unloading of the transportation carrier. Image processing can also detect jamming of the product sequence fed from individually controllable zones of a conveyor of the robotic carton loader or unloader for singulated unloading.
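The size-based routing of 3D edge-detection segments can be sketched simply: fragments below a minimum size are discarded as noise, carton-sized segments are kept, and oversized segments are handed to the 2D image pass for further edge detection. The thresholds and segment fields below are illustrative assumptions:

```python
def route_segments(segments, min_area, max_area):
    """Split 3D edge-detection segments by size: discard too-small
    fragments, keep carton-sized segments, and send oversized ones
    to the 2D image-processing pass."""
    keep, to_2d = [], []
    for seg in segments:
        if seg["area"] < min_area:
            continue            # noise: too small to be a carton
        (to_2d if seg["area"] > max_area else keep).append(seg["id"])
    return keep, to_2d
```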