Remote control system and remote control method
A remote control system includes: an imaging unit that captures an image of an environment in which a device to be operated, which includes an end effector, is located; a recognition unit that recognizes objects that can be grasped by the end effector based on the captured image; an operation terminal that displays the captured image and receives handwritten input information drawn on the displayed image; and an estimation unit that, based on the graspable objects and the handwritten input information, estimates which of the graspable objects the end effector has been requested to grasp and estimates how the requested grasping motion is to be performed on that object.
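A minimal sketch of the estimation step described in this abstract, assuming the recognition unit yields 2-D bounding boxes and the operation terminal delivers a handwritten stroke as a list of (x, y) pixel points; the object names, the centroid matching, and the closed-vs-open stroke heuristic are illustrative assumptions, not the patent's method:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class GraspableObject:
    name: str
    box: tuple  # (x_min, y_min, x_max, y_max) in image pixels

def stroke_centroid(stroke):
    xs, ys = zip(*stroke)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def estimate_target(objects, stroke):
    """Pick the graspable object whose box center lies nearest the stroke centroid."""
    cx, cy = stroke_centroid(stroke)
    def dist(obj):
        ox = (obj.box[0] + obj.box[2]) / 2
        oy = (obj.box[1] + obj.box[3]) / 2
        return hypot(ox - cx, oy - cy)
    return min(objects, key=dist)

def estimate_motion(stroke):
    """Closed stroke (ends meet) -> enclosing grasp; open stroke -> directional grasp."""
    gap = hypot(stroke[0][0] - stroke[-1][0], stroke[0][1] - stroke[-1][1])
    return "enclosing_grasp" if gap < 20 else "directional_grasp"

objects = [GraspableObject("cup", (100, 80, 160, 150)),
           GraspableObject("pen", (300, 200, 420, 215))]
circle = [(130, 70), (170, 110), (130, 160), (90, 110), (128, 72)]  # drawn around the cup
print(estimate_target(objects, circle).name, estimate_motion(circle))  # -> cup enclosing_grasp
```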
Positioning a Robot Sensor for Object Classification
In one embodiment, a method includes receiving, from a first sensor on a robot, first sensor data indicative of an environment of the robot. The method also includes identifying, based on the first sensor data, an object of an object type in the environment of the robot, where the object type is associated with a classifier that takes sensor data from a predetermined pose relative to the object as input. The method further includes causing the robot to position a second sensor on the robot at the predetermined pose relative to the object. The method additionally includes receiving, from the second sensor, second sensor data indicative of the object while the second sensor is positioned at the predetermined pose relative to the object. The method further includes determining, by inputting the second sensor data into the classifier, a property of the object.
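A control-flow sketch of the two-sensor method under stated assumptions: the pose table, the stubbed robot motion, the sensor reads, and the per-type classifiers below are hypothetical stand-ins for the components the abstract names:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float; y: float; z: float  # sensor offset relative to the object, in meters

# Each object type is associated with a classifier trained on sensor data
# taken from one predetermined pose relative to the object.
POSE_FOR_TYPE = {"mug": Pose(0.0, 0.0, 0.25)}
CLASSIFIER_FOR_TYPE = {"mug": lambda data: "ceramic" if sum(data) > 1.0 else "plastic"}

def detect_object_type(first_sensor_data):
    return "mug"  # placeholder for coarse detection from the first sensor

def move_second_sensor_to(pose, object_location):
    print(f"positioning second sensor at offset {pose} from object at {object_location}")

def read_second_sensor():
    return [0.4, 0.7, 0.3]  # placeholder close-range data from the second sensor

def classify_property(first_sensor_data, object_location):
    obj_type = detect_object_type(first_sensor_data)
    move_second_sensor_to(POSE_FOR_TYPE[obj_type], object_location)  # predetermined pose
    data = read_second_sensor()  # captured while held at that pose
    return CLASSIFIER_FOR_TYPE[obj_type](data)

print(classify_property(first_sensor_data=[0.1] * 8, object_location=(1.2, 0.3, 0.0)))
```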
ROBOT FOR DETECTING AND PICKING UP AT LEAST ONE PREDETERMINED OBJECT
A robot configured to recognize and pick up at least one predetermined object, arranged such that the object is recognized and picked up in a work space below the robot. The robot may have an end effector and an adjusting unit for picking up the predetermined object, both of which are disposed in the work space below the robot.
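An illustrative sketch of the detect-then-pick loop in a work space below the robot; the scan results, the label match, and the actuator call are assumed placeholders rather than the patent's actual mechanism:

```python
PREDETERMINED_OBJECT = "bolt"

def scan_workspace_below():
    # Placeholder: (label, position) detections in the area under the robot.
    return [("washer", (0.10, 0.20)), ("bolt", (0.35, 0.15))]

def pick(position):
    print(f"adjusting unit moves end effector over {position}; end effector closes")

for label, position in scan_workspace_below():
    if label == PREDETERMINED_OBJECT:
        pick(position)
        break
```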
Target object recognition device, manipulator, and mobile robot
Provided is a technique capable of recognizing the states of a plurality of target objects arranged in a prescribed spatial region. This target object recognition device is provided with: a plurality of calculation processing units (21, 22), each of which calculates the attitude state of a target object in the prescribed spatial region using a different technique; a state recognition unit (23) that recognizes the layout state of all of the target objects arranged in the region; a method determination unit (24) that, in accordance with the recognition result of the state recognition unit (23), determines how the calculation results of the calculation processing units (21, 22) are to be used; and a target object recognition unit (25) that recognizes the attitude states of the target objects by means of the determined method of using the calculation results.
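A minimal sketch of how units 21-25 might interact: two attitude estimators, a layout recognizer, and a determination step that chooses how their results are used; the "piled"/"separated" states and the averaging/priority rules are illustrative assumptions:

```python
def estimator_a(obj):   # e.g., model-based matching (unit 21)
    return obj["true_angle"] + 2.0

def estimator_b(obj):   # e.g., learning-based regression (unit 22)
    return obj["true_angle"] - 5.0

def recognize_layout(objects):        # unit 23: overall layout state
    return "piled" if len(objects) > 3 else "separated"

def determine_method(layout):         # unit 24: how to use the two results
    if layout == "separated":
        return lambda a, b: (a + b) / 2   # trust both techniques, average
    return lambda a, b: a                 # clutter: prefer estimator A only

def recognize_attitudes(objects):     # unit 25: final attitude recognition
    fuse = determine_method(recognize_layout(objects))
    return [fuse(estimator_a(o), estimator_b(o)) for o in objects]

print(recognize_attitudes([{"true_angle": 30.0}, {"true_angle": 90.0}]))
```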
Domain adaptation using simulation to simulation transfer
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a generator neural network to adapt input images.
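A toy illustration of the idea, with a per-channel affine map standing in for the generator neural network: images sampled from one simulator are adapted until their brightness statistics match a second simulator's. Everything here (data, loss, optimizer) is invented for illustration:

```python
import random

random.seed(0)
# 100 "images" of 64 pixels from each simulator, differing in brightness statistics.
source = [[random.gauss(0.2, 0.05) for _ in range(64)] for _ in range(100)]  # simulator A
target = [[random.gauss(0.6, 0.10) for _ in range(64)] for _ in range(100)]  # simulator B

t_mean = sum(sum(img) for img in target) / (len(target) * 64)  # target-domain statistic

scale, shift, lr = 1.0, 0.0, 0.5
for step in range(200):
    img = random.choice(source)
    adapted = [scale * p + shift for p in img]   # "generator" output
    a_mean = sum(adapted) / len(adapted)
    # Gradient of the loss (a_mean - t_mean)^2 w.r.t. the two generator parameters.
    grad_shift = 2 * (a_mean - t_mean)
    grad_scale = grad_shift * (sum(img) / len(img))
    shift -= lr * grad_shift
    scale -= lr * grad_scale

print(f"learned map: x -> {scale:.2f} * x + {shift:.2f}")  # moves sim-A brightness toward sim-B's
```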
Automatic Robot Perception Programming by Imitation Learning
Apparatus, systems, methods, and articles of manufacture for automatic robot perception programming by imitation learning are disclosed. An example apparatus includes a percept mapper to identify a first percept and a second percept from data gathered from a demonstration of a task and an entropy encoder to calculate a first saliency of the first percept and a second saliency of the second percept. The example apparatus also includes a trajectory mapper to map a trajectory based on the first percept and the second percept, the first percept skewed based on the first saliency, the second percept skewed based on the second saliency. In addition, the example apparatus includes a probabilistic encoder to determine a plurality of variations of the trajectory and create a collection of trajectories including the trajectory and the variations of the trajectory. The example apparatus also includes an assemble network to imitate an action based on a first simulated signal from a first neural network of a first modality and a second simulated signal from a second neural network of a second modality, the action representative of a perceptual skill.
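A condensed sketch of that pipeline: entropy-based saliency weights for two percept streams, a saliency-skewed trajectory, and random variations collected alongside it; the histogram entropy and the Gaussian perturbations are illustrative choices, not the patent's exact encoders:

```python
import math, random

random.seed(1)

def entropy(samples, bins=8, lo=0.0, hi=1.0):
    """Shannon entropy of a histogram of the samples over [lo, hi]."""
    counts = [0] * bins
    for s in samples:
        counts[min(bins - 1, int((s - lo) / (hi - lo) * bins))] += 1
    probs = [c / len(samples) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)

percept_vision = [0.1, 0.9, 0.2, 0.8, 0.15, 0.85]      # varies a lot -> salient
percept_touch  = [0.50, 0.51, 0.50, 0.49, 0.50, 0.51]  # nearly constant -> less salient

w1, w2 = entropy(percept_vision), entropy(percept_touch)
# Trajectory skewed toward the more salient percept.
trajectory = [(w1 * v + w2 * t) / (w1 + w2)
              for v, t in zip(percept_vision, percept_touch)]
# Probabilistic-encoder stand-in: a collection of perturbed variations.
variations = [[p + random.gauss(0, 0.02) for p in trajectory] for _ in range(5)]
collection = [trajectory] + variations
print(f"saliency: vision={w1:.2f}, touch={w2:.2f}; collection of {len(collection)} trajectories")
```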
Robot Work System and Method of Controlling Robot Work System
An information processing apparatus obtains, based on a captured image of an area including a plurality of work target candidates transported by a transport machine, a plurality of combinations of a work target candidate position and a transport machine optimum control parameter, i.e., the control parameter of the transport machine that maximizes performance of the work when that candidate is set as the work target; determines the work target from among the candidates based on the combinations; controls the transport machine based on the transport machine optimum control parameter of the determined work target; generates a control plan for the robot based on the position of the determined work target and that optimum control parameter; and controls the robot according to the generated control plan.
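A minimal sketch of the selection logic: for every candidate seen in the captured image, find the belt-speed control parameter that maximizes a work-performance score, then commit to the best (candidate, parameter) pair; the score model and speed grid are invented for illustration:

```python
def performance(pos, belt_speed):
    # Toy throughput-vs-accuracy trade-off: accuracy falls with speed and with
    # the candidate's distance from the robot's sweet spot at x = 0.5.
    accuracy = max(0.0, 1.0 - 4.0 * abs(pos - 0.5) * belt_speed)
    return belt_speed * accuracy

candidates = [0.2, 0.48, 0.9]   # positions extracted from the captured image
speeds = [0.1, 0.2, 0.4, 0.8]   # admissible transport-machine control parameters

# One (position, optimum parameter, score) combination per candidate.
combinations = [(pos, best, performance(pos, best))
                for pos in candidates
                for best in [max(speeds, key=lambda s: performance(pos, s))]]

target_pos, opt_speed, score = max(combinations, key=lambda c: c[2])
print(f"work target at x={target_pos}, belt speed {opt_speed}, score {score:.2f}")
# A robot control plan would then be generated from target_pos and opt_speed.
```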
Sensorized robotic gripping device
A robotic gripping device is provided. The robotic gripping device includes a palm and a plurality of digits coupled to the palm. The robotic gripping device also includes a time-of-flight sensor arranged on the palm such that the time-of-flight sensor is configured to generate time-of-flight distance data in a direction between the plurality of digits. The robotic gripping device additionally includes an infrared camera, including an infrared illumination source, where the infrared camera is arranged on the palm such that the infrared camera is configured to generate grayscale image data in the direction between the plurality of digits.
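A small sketch of how the two palm-mounted sensors might gate a grasp, assuming the time-of-flight reading tells whether something sits between the digits and the infrared image confirms it is centered; the thresholds and interfaces are illustrative assumptions:

```python
CLOSE_RANGE_M = 0.08  # object within 8 cm of the palm

def object_between_digits(tof_distance_m):
    return tof_distance_m < CLOSE_RANGE_M

def object_centered(grayscale):
    """grayscale: 2-D list of 0..255 IR intensities from the palm camera."""
    h, w = len(grayscale), len(grayscale[0])
    bright = [(r, c) for r in range(h) for c in range(w) if grayscale[r][c] > 128]
    if not bright:
        return False
    mean_col = sum(c for _, c in bright) / len(bright)
    return abs(mean_col - w / 2) < w * 0.15  # bright blob near the image center

def should_close_gripper(tof_distance_m, grayscale):
    return object_between_digits(tof_distance_m) and object_centered(grayscale)

frame = [[0, 40, 200, 210, 30, 0]] * 4   # bright blob in the middle columns
print(should_close_gripper(0.05, frame))  # -> True
```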
Generating and utilizing spatial affordances for an object in robotics applications
Methods, apparatus, systems, and computer-readable media are provided for generating spatial affordances for an object, in an environment of a robot, and utilizing the generated spatial affordances in one or more robotics applications directed to the object. Various implementations relate to applying vision data as input to a trained machine learning model, processing the vision data using the trained machine learning model to generate output defining one or more spatial affordances for an object captured by the vision data, and controlling one or more actuators of a robot based on the generated output. Various implementations additionally or alternatively relate to training such a machine learning model.
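A control-flow sketch of the affordance pipeline: vision data passes through a (stubbed) trained model that emits spatial affordances, and an actuator command is derived from one of them; the affordance fields and the stub model are assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class SpatialAffordance:
    action: str      # e.g., "grasp", "push"
    region: tuple    # (x, y, z) point on the object, robot frame
    confidence: float

def trained_model(vision_data):
    # Stand-in for the trained machine learning model described above.
    return [SpatialAffordance("grasp", (0.42, 0.10, 0.05), 0.91),
            SpatialAffordance("push",  (0.40, 0.18, 0.05), 0.55)]

def control_actuators(affordance):
    print(f"moving end effector to {affordance.region} to {affordance.action}")

affordances = trained_model(vision_data=b"...raw image bytes...")
best = max((a for a in affordances if a.action == "grasp"),
           key=lambda a: a.confidence)
control_actuators(best)
```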