Patent classifications
G05B2219/40532
Positioning a robot sensor for object classification
In one embodiment, a method includes receiving, from a first sensor on a robot, first sensor data indicative of an environment of the robot. The method also includes identifying, based on the first sensor data, an object of an object type in the environment of the robot, where the object type is associated with a classifier that takes sensor data from a predetermined pose relative to the object as input. The method further includes causing the robot to position a second sensor on the robot at the predetermined pose relative to the object. The method additionally includes receiving, from the second sensor, second sensor data indicative of the object while the second sensor is positioned at the predetermined pose relative to the object. The method further includes determining, by inputting the second sensor data into the classifier, a property of the object.
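The claimed pipeline — detect an object type, look up that type's predetermined sensing pose, move the second sensor there, then classify — can be sketched as follows. All names, the 2-D pose representation, and the per-type offsets are illustrative assumptions, not the patent's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    theta: float

# Per-type predetermined poses *relative to the object* (assumed values).
PREDETERMINED_POSE = {
    "mug": Pose2D(x=0.0, y=-0.15, theta=0.0),   # e.g. 15 cm in front of the mug
}

def sensor_target_pose(object_pose: Pose2D, object_type: str) -> Pose2D:
    """Compose the detected object pose with the type's predetermined offset."""
    rel = PREDETERMINED_POSE[object_type]
    c, s = math.cos(object_pose.theta), math.sin(object_pose.theta)
    return Pose2D(
        x=object_pose.x + c * rel.x - s * rel.y,
        y=object_pose.y + s * rel.x + c * rel.y,
        theta=object_pose.theta + rel.theta,
    )

def classify(second_sensor_data):
    """Stand-in classifier for the second sensor's data."""
    return "full" if sum(second_sensor_data) > 0 else "empty"

# Object detected by the first sensor at (1, 2), facing theta = 0.
pose = sensor_target_pose(Pose2D(1.0, 2.0, 0.0), "mug")
prop = classify([0.2, 0.7, 0.1])
```

The key point the sketch captures is that the viewing pose is a property of the object *type*, so the same classifier always receives data from a consistent vantage point.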
ROBOT CONTROL DEVICE, AND METHOD AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR CONTROLLING THE SAME
This invention provides a robot control device for controlling a robot configured to perform a predetermined operation, where the robot control device comprises an acquisition unit configured to acquire a plurality of images captured by a plurality of image capturing devices, including a first image capturing device and a second image capturing device different from the first image capturing device; and a specification unit configured to use the plurality of captured images acquired by the acquisition unit as inputs to a neural network, and to specify a control instruction for the robot based on an output resulting from the neural network.
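A minimal sketch of the described flow, under toy assumptions: features from the two capturing devices are concatenated and passed through a single linear layer standing in for the neural network, and the argmax of its output selects a discrete control instruction. The instruction set, weights, and feature vectors are all hypothetical.

```python
INSTRUCTIONS = ["move_left", "move_right", "grasp"]

def specify_instruction(img_a, img_b, weights, bias):
    """img_a, img_b: flat feature vectors from the two cameras;
    weights: one row per instruction; bias: one scalar per instruction."""
    x = list(img_a) + list(img_b)              # fuse the two captured images
    scores = [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
              for row, b in zip(weights, bias)]
    return INSTRUCTIONS[scores.index(max(scores))]

# Toy weights for two 2-element "images" (4 fused features total).
cmd = specify_instruction(
    [1, 0], [0, 1],
    weights=[[1, 0, 0, 0], [0, 0, 0, 1], [0, 1, 1, 0]],
    bias=[0.0, 0.5, 0.0],
)
```

A trained network would replace the single linear layer, but the interface — multiple images in, one control instruction out — is the part the claim describes.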
POSE DETECTION OF OBJECTS FROM IMAGE DATA
Object pose may be detected by obtaining a computer model of a physical object and simulating the computer model in a realistic environment simulator. Training data is captured comprising a plurality of pose representations, each pairing an image of the computer model in one of a plurality of poses with a label containing the pose specification of the computer model as shown in the image; both the image and the pose specification are defined by the simulator. A learning process is then applied to the pose representations to produce a pose-determining function for relating an image of the object to a pose specification.
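The pipeline above can be sketched end to end under toy assumptions: a stand-in "simulator" renders a two-element feature vector for each pose angle, the captured pairs of (image, pose label) form the training data, and the learned pose-determining function is a simple nearest-neighbour lookup over those pairs rather than a trained network.

```python
import math

def render(angle):
    """Stand-in for the realistic environment simulator's rendered image."""
    return [math.cos(angle), math.sin(angle)]

def capture_training_data(n=36):
    """Each pose representation pairs a simulated image with its pose label."""
    poses = [2 * math.pi * i / n for i in range(n)]
    return [(render(a), a) for a in poses]

def learn_pose_function(data):
    """Toy 'learning process': nearest-neighbour over the simulated pairs."""
    def pose_of(image):
        def dist(img):
            return sum((u - v) ** 2 for u, v in zip(img, image))
        return min(data, key=lambda pair: dist(pair[0]))[1]
    return pose_of

data = capture_training_data()
pose_fn = learn_pose_function(data)
estimate = pose_fn(render(1.0))   # query with an image of the object at 1.0 rad
```

The estimate snaps to the nearest simulated pose, so its error is bounded by half the sampling spacing of the training poses.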
SHADING TOPOGRAPHY IMAGING FOR ROBOTIC UNLOADING
Vision systems for robotic assemblies for handling cargo, for example, unloading cargo from a trailer, can determine the position of cargo based on shading topography. Shading topography imaging can be performed by using light sources arranged at different positions relative to the image capture device(s).
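One way shading under differently positioned light sources reveals topography is photometric: for a Lambertian surface and two known light directions, the two measured intensities form a 2x2 linear system whose solution is the surface normal, and normal discontinuities mark edges between cargo items. The sketch below assumes this Lambertian model and a 2-D cross-section; it is illustrative, not the patent's method.

```python
import math

def normal_from_shading(i1, i2, l1, l2):
    """Solve [l1; l2] @ n = [i1, i2] for the 2-D surface normal n,
    where i1, i2 are intensities under unit light directions l1, l2."""
    det = l1[0] * l2[1] - l1[1] * l2[0]
    nx = (i1 * l2[1] - i2 * l1[1]) / det
    nz = (l1[0] * i2 - l2[0] * i1) / det
    norm = math.hypot(nx, nz)
    return (nx / norm, nz / norm)

# A surface facing straight up (normal (0, 1)) lit from symmetric left
# and right positions returns equal intensities, and the solver recovers it.
n = normal_from_shading(0.8, 0.8, (0.6, 0.8), (-0.6, 0.8))
```

Asymmetric intensities from the two lights would yield a tilted normal, which is how shading differences encode surface slope.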
Generating and utilizing spatial affordances for an object in robotics applications
Methods, apparatus, systems, and computer-readable media are provided for generating spatial affordances for an object, in an environment of a robot, and utilizing the generated spatial affordances in one or more robotics applications directed to the object. Various implementations relate to applying vision data as input to a trained machine learning model, processing the vision data using the trained machine learning model to generate output defining one or more spatial affordances for an object captured by the vision data, and controlling one or more actuators of a robot based on the generated output. Various implementations additionally or alternatively relate to training such a machine learning model.
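A toy sketch of the described loop — vision data in, spatial affordances out, actuator command derived from the affordances. The "model" here is a hard-coded stand-in for the trained machine learning model, and the bounding-box input and grasp-point rule are assumptions.

```python
def affordance_model(bbox):
    """Stand-in for the trained model: vision data is an object's 2-D
    bounding box; the output defines one grasp affordance point at the
    top-centre edge of the object."""
    x_min, y_min, x_max, y_max = bbox
    return {"grasp_point": ((x_min + x_max) / 2.0, y_min)}

def actuator_command(affordances):
    """Control one or more actuators based on the model's output."""
    gx, gy = affordances["grasp_point"]
    return ("move_to", gx, gy)

cmd = actuator_command(affordance_model((2.0, 1.0, 6.0, 5.0)))
```

The separation matters: the model outputs affordances (where the object can be acted on), and a downstream controller turns them into actuator motion, so either side can be swapped independently.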
Automated Manipulation Of Transparent Vessels
An actuator and end effector are controlled according to images from cameras having a surface in their field of view. Vessels (cups, bowls, etc.) and other objects are identified in the images and their configuration is assigned to a finite set of categories by a classifier that does not output a 3D bounding box or determine a 6D pose. For objects assigned to a first subset of categories, grasping parameters for controlling the actuator and end effector are determined using only 2D bounding boxes, such as oriented 2D bounding boxes. For objects not assigned to the first subset, a righting operation may be performed using only 2D bounding boxes. Objects that are still not in the first subset may then be grasped by estimating a 3D bounding box and 6D pose.
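The escalation described in the abstract is a decision cascade: cheap 2D-box grasping for the first subset of categories, a righting operation for tipped objects, and full 3D-box/6D-pose estimation only as a last resort. The category names below are hypothetical.

```python
# Assumed category sets; the first subset is directly graspable from 2-D boxes.
UPRIGHT = {"cup_upright", "bowl_upright"}
RIGHTABLE = {"cup_on_side"}

def plan_action(category):
    """Cascade from cheapest to most expensive perception/manipulation."""
    if category in UPRIGHT:
        return "grasp_from_2d_bbox"       # 2-D (possibly oriented) box only
    if category in RIGHTABLE:
        return "righting_from_2d_bbox"    # right it, then re-classify
    return "grasp_from_6d_pose"           # fall back to 3-D box + 6-D pose

a1 = plan_action("cup_upright")
a2 = plan_action("cup_on_side")
a3 = plan_action("cup_wedged")
```

The design choice is that 6D pose estimation — the expensive and, for transparent vessels, least reliable step — is reached only after the cheaper 2D-box strategies have been exhausted.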
ROBOTIC LAUNDRY SORTING DEVICES, SYSTEMS, AND METHODS OF USE
Devices, systems, and methods for autonomously separating and sorting a plurality of individual articles from a pile of laundry articles into two or more sorted loads for washing are described. For example, an autonomous sorting and separating system includes a stationary surface configured to receive thereon at a first location the pile of laundry articles. A plurality of actuatable grippers are disposed at spaced apart positions adjacent the stationary surface and comprise a first actuatable gripper configured to grasp, hoist, and deposit at a second location at least one of the plurality of individual articles within reach of a second actuatable gripper. A terminal gripper comprising at least one of the second actuatable gripper and another actuatable gripper is configured to release an individual article into one of the two or more sorted loads. At least one controller is in operable communication with the grippers.
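The sorting behaviour of the terminal gripper can be sketched as a simple loop: each article hoisted from the pile is relayed within reach of the terminal gripper, which releases it into one of the sorted loads according to some classification rule. The colour rule and all names here are assumptions for illustration.

```python
def assign_load(article):
    """Assumed sorting rule: separate whites from colours for washing."""
    return "whites" if article["color"] == "white" else "colours"

def sort_pile(pile):
    loads = {"whites": [], "colours": []}
    for article in pile:                 # first gripper grasps and hoists
        # terminal gripper releases into the selected sorted load
        loads[assign_load(article)].append(article["name"])
    return loads

loads = sort_pile([
    {"name": "towel", "color": "white"},
    {"name": "shirt", "color": "red"},
])
```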
Artificial intelligence system for modeling and evaluating robotic success at task performance
A machine learning system builds and uses computer models for identifying how to evaluate the level of success reflected in a recorded observation of a task. Such computer models may be used to generate a policy for controlling a robotic system performing the task. The computer models can also be used to evaluate robotic task performance and provide feedback for recalibrating the robotic control policy.
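The feedback loop in the abstract — score a recorded observation with a success model, then use low scores to recalibrate the control policy — can be sketched as below. The success model, the scalar "policy gain", and the update rule are all toy assumptions standing in for the learned components.

```python
def success_score(observation):
    """Stand-in success model: fraction of recorded steps that succeeded."""
    return sum(observation) / len(observation)

def recalibrate(policy_gain, observation, target=0.8, lr=0.5):
    """Feed the evaluator's score back into the policy parameter."""
    score = success_score(observation)
    if score < target:                  # task under-performed: adjust policy
        policy_gain += lr * (target - score)
    return policy_gain

# Recorded observation with 2 of 4 successful steps (score 0.5 < 0.8),
# so the gain is nudged upward.
gain = recalibrate(1.0, [1, 0, 0, 1])
```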
Sensorized Robotic Gripping Device
A robotic gripping device is provided. The robotic gripping device includes a palm and a plurality of digits coupled to the palm. The robotic gripping device also includes a time-of-flight sensor arranged on the palm such that the time-of-flight sensor is configured to generate time-of-flight distance data in a direction between the plurality of digits. The robotic gripping device additionally includes an infrared camera, including an infrared illumination source, where the infrared camera is arranged on the palm such that the infrared camera is configured to generate grayscale image data in the direction between the plurality of digits.
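A natural use of the two palm sensors is to fuse them before closing the digits: the time-of-flight reading tells whether something is within grasp range between the digits, and the infrared grayscale image confirms an object is actually present. The thresholds and decision rule below are assumptions, not the patent's logic.

```python
def object_between_digits(tof_distance_m, ir_image,
                          max_dist=0.10, min_brightness=40):
    """Fuse the palm time-of-flight distance (metres) with the mean
    brightness of the IR grayscale image taken between the digits."""
    mean_brightness = sum(ir_image) / len(ir_image)
    return tof_distance_m < max_dist and mean_brightness > min_brightness

# Close the gripper only when both sensors agree an object is present.
should_close = object_between_digits(0.05, [60, 55, 70, 65])
no_object = object_between_digits(0.30, [60, 55, 70, 65])
```

Requiring both modalities to agree guards against each sensor's failure mode: the ToF sensor alone cannot distinguish an object from the opposing digit, and the IR camera alone cannot tell whether a bright object is within reach.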